Choosing the Right Forecast Comparison Measure in a Best-Fit Forecast for SAP IBP


Once you have decided to run a best-fit forecast in IBP, the next step, after selecting the forecasting methods you want to test, is choosing the measure they will be tested against. SAP IBP chooses the best forecast for your data by measuring the error between your actual historical data and each forecast’s ex-post values. Ex-post values are what the forecast would have predicted past values to be.


Above is an example inside the IBP Excel interface with mock data; in the Calendar Weeks prior to 2019 CW37, we can compare the Actuals Quantity of sales to the chosen statistical forecast’s Ex-Post Forecast Quantity, i.e., what the forecast would have predicted those past sales to be. In the table below the chart, the “Residuals FC-DEL” field measures the difference between the Ex-Post Forecast Quantity and the Actuals Quantity. The forecast shown here (in this case a triple exponential smoothing forecast) is displayed because the system tested each forecasting method I selected against my historical data and chose the one with the lowest error. With so many forecast error measures available in IBP, how do you know what they all mean and which one to choose?
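In formula terms (a sketch following that field’s forecast-minus-actual convention, with F_t the ex-post forecast and A_t the actual quantity in period t):

Residual_t = F_t − A_t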

Error Measures are more fun when you know what they mean

In the example above, I chose MAPE as my measure for forecast comparison, but it can be tricky to pick the right measure for your situation, especially because online explanations of these error measures often amount to nothing more than a block of raw formulas with no interpretation.


Here are some more useful error definitions:

Mean Percentage Error (MPE): like MAPE, it is the average of the percentage errors (for each period, take the actual value and the ex-post forecast value and calculate what percentage the forecast is off by). This error measure uses signed percentages rather than absolute ones, so positive and negative percentages can offset each other. It can therefore also serve as a measure of bias in a forecast, i.e., whether the forecast tends to overestimate or underestimate values overall.
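As a formula (a sketch, with A_t the actual value and F_t the ex-post forecast in period t, and n the number of periods; sources differ on the sign convention):

MPE = (100% / n) × Σ (A_t − F_t) / A_t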

Mean Absolute Percentage Error (MAPE): one of the most widely used measures of forecast accuracy. It measures the absolute size of each error in percentage terms, then averages those percentages. MAPE and MPE are typically not ideal for low-volume data: being off by just a few units can skew the final percentage significantly, and because the formula divides by the actuals quantity, an actual demand of zero makes the calculation undefined. MAPE’s output is easy to interpret: if the MAPE is 4%, you can say your forecast was off by 4% on average.
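Using the same notation, the only change from MPE is the absolute value, and the division by A_t is exactly what breaks when an actual is zero:

MAPE = (100% / n) × Σ |A_t − F_t| / A_t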

Mean Square Error (MSE): measures the average squared difference between the forecasted and actual values. It tells you how close you are to the most accurate “line of best fit”: the higher the value, the worse the fit. Because every error is squared, a single very bad forecast value can skew the MSE, and the same effect makes this measure problematic when the data is noisy.
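As a formula:

MSE = (1 / n) × Σ (A_t − F_t)²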

Root Mean Square Error (RMSE): simply the square root of the MSE. The errors were squared in the first place so that negative errors would not cancel out positive ones; taking the square root afterwards brings the result back to the same units as the values being forecast. The RMSE can therefore be interpreted as a typical absolute distance between the forecasted and actual values.
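As a formula:

RMSE = √MSE = √( (1 / n) × Σ (A_t − F_t)² )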

Mean Absolute Deviation (MAD): the average absolute distance between each actual value and its ex-post forecast; in other words, it measures the size of the error in units. MAD is good for measuring the error of a single item, but when aggregated across multiple items it should be used cautiously, as high-volume items can dominate the numbers and obscure important information about lower-volume items.
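As a formula, it is the un-squared counterpart of the MSE:

MAD = (1 / n) × Σ |A_t − F_t|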

Error Total (ET): simply the sum of the differences between the actual values and the forecasted values. Because positive and negative errors cancel each other out, ET (like MPE) says more about overall bias than about accuracy.
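As a formula:

ET = Σ (A_t − F_t)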

Mean Absolute Scaled Error (MASE): the ratio between the average absolute error of the given forecasting method and the average absolute error of a “naïve” forecast, in which you take the last period’s actuals and use them as the next period’s forecast. A MASE above 1 suggests the forecasting method does a worse job of predicting values than simply recycling the prior period’s values.
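As a sketch of the standard definition (IBP’s exact implementation may differ), the numerator is the method’s average absolute error and the denominator is the naïve forecast’s, using each period’s predecessor as its forecast:

MASE = [ (1/n) × Σ |A_t − F_t| ] / [ (1/(n−1)) × Σ |A_t − A_(t−1)| ]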

Weighted Mean Absolute Percentage Error (WMAPE): similar to MAPE, but it weights each percentage error by the item’s volume, so lower-volume items are not treated as equal to higher-demand items.
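A common way to write it, weighting errors by actual volume (a sketch of the usual convention, not necessarily IBP’s exact formula):

WMAPE = 100% × Σ |A_t − F_t| / Σ A_t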


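To make the mechanics concrete, here is a minimal sketch in plain Python (my own illustration, not SAP code; the method names and numbers are invented). It mirrors what a best-fit run does conceptually: score each candidate’s ex-post values against the same actuals and keep the method with the lowest error, here using MAPE.

import math

def mape(actuals, ex_post):
    # Mean Absolute Percentage Error; assumes no actual is zero.
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actuals, ex_post)) / len(actuals)

def rmse(actuals, ex_post):
    # Root Mean Square Error, in the same units as the data.
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, ex_post)) / len(actuals))

# Mock history and two hypothetical candidates' ex-post values.
actuals = [100, 120, 90, 110]
candidates = {
    "double_exp_smoothing": [95, 118, 100, 105],
    "triple_exp_smoothing": [101, 119, 92, 108],
}

# The best-fit winner is simply the candidate with the lowest error.
best = min(candidates, key=lambda name: mape(actuals, candidates[name]))
print(best)  # -> triple_exp_smoothing (MAPE of roughly 1.5% vs roughly 5.6%)

Swap mape for rmse in the min call and you have a different comparison measure, and potentially a different winner; that choice is exactly what this post is about.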
I find myself using MAPE most often, but which forecast comparison error you choose depends on your own priorities. With that, happy forecasting!
