I'm comparing some forecasting methods using four accuracy measures: Mean Absolute Error (MAE), Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Scaled Error (MASE). The results are contradictory across these measures; which method should ultimately be selected? Is there any solution in this case?
Solved – Which forecasting method should be selected in case of contradictory results from different accuracy measures
accuracy, forecasting, time-series
Related Solutions
MSE is scale-dependent, MAPE is not. So if you are comparing accuracy across time series with different scales, you can't use MSE.
For business use, MAPE is often preferred because apparently managers understand percentages better than squared errors.
MAPE can't be used when percentages make no sense. For example, the Fahrenheit and Celsius temperature scales have relatively arbitrary zero points, so it makes no sense to talk about percentages. MAPE also cannot be used when the time series can take zero values, because the percentage error is then undefined.
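As a quick illustration of the zero problem (a toy sketch with made-up numbers):

```r
# The percentage error is undefined (here: infinite) wherever the
# actual value is zero, so the MAPE blows up.
y  <- c(0, 2, 4)        # actuals, first one is zero
fc <- c(1, 2, 3)        # some forecast
mean(abs((y - fc) / y)) * 100   # Inf, driven entirely by y[1] == 0
```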
MASE is intended to be scale-free and usable in all of these situations.
As @Dmitrij said, the accuracy() function in the forecast package for R is an easy way to compute these and other accuracy measures.
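For instance (a minimal sketch, using the AirPassengers series that ships with base R as a stand-in for your data):

```r
# Compare two standard methods on a holdout; accuracy() reports MAE,
# RMSE, MAPE, MASE and more, for training and test sets alike.
library(forecast)

train <- window(AirPassengers, end = c(1958, 12))
test  <- window(AirPassengers, start = c(1959, 1))

accuracy(forecast(auto.arima(train), h = length(test)), test)
accuracy(forecast(ets(train), h = length(test)), test)
```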
There is a lot more about forecast accuracy measures in my 2006 IJF paper with Anne Koehler.
In the linked blog post, Rob Hyndman calls for entries to a tourism forecasting competition. Essentially, the blog post serves to draw attention to the relevant IJF article, an ungated version of which is linked to in the blog post.
The benchmarks you refer to - 1.38 for monthly, 1.43 for quarterly and 2.28 for yearly data - were apparently arrived at as follows. The authors (all of them are expert forecasters and very active in the IIF - no snake oil salesmen here) are quite capable of applying standard forecasting algorithms or forecasting software, and they are probably not interested in a simple ARIMA submission. So they went and applied some standard methods to their data. For the winning submission to be invited for a paper in the IJF, they ask that it improve on the best of these standard methods, as measured by the MASE.
So your question essentially boils down to:
Given that a MASE of 1 corresponds to a forecast that does out-of-sample as well (by MAD) as the naive random walk forecast does in-sample, why can't standard forecasting methods like ARIMA improve on a MASE of 1.38 for monthly data?
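For reference, the MASE of Hyndman & Koehler (2006) in the non-seasonal case is

$$\text{MASE} = \frac{\frac{1}{h}\sum_{j=1}^{h}\left|e_{n+j}\right|}{\frac{1}{n-1}\sum_{t=2}^{n}\left|Y_t - Y_{t-1}\right|},$$

where $e_{n+j}$ is the error of the $j$-step-ahead forecast, so the numerator is the MAD of the $h$ out-of-sample forecast errors and the denominator is the in-sample MAD of the one-step naive (random walk) forecast. For seasonal data, the naive benchmark uses lag $m$ instead of lag 1.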
Here, the 1.38 MASE comes from Table 4 in the ungated version. It is the absolute scaled error averaged over 1-24 month ahead forecasts from ARIMA. The other standard methods, like ForecastPro, ETS etc., perform even worse.
And here, the answer gets hard. It is always very problematic to judge forecast accuracy without considering the data. One possibility I could think of in this particular case could be accelerating trends. Suppose that you try to forecast $\exp(t)$ with standard methods. None of these will capture the accelerating trend (and this is usually a Good Thing - if your forecasting algorithm often models an accelerating trend, you will likely far overshoot your mark), and they will yield a MASE that is above 1. Other explanations could, as you say, be different structural breaks, e.g., level shifts or external influences like SARS or 9/11, which would not be captured by the non-causal benchmark models, but which could be modeled by dedicated tourism forecasting methods (although using future causals in a holdout sample is a kind of cheating).
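To make the $\exp(t)$ point concrete, here is a small sketch (series and train/test split invented for illustration); one would expect a test MASE well above 1, since the fitted model extrapolates too slowly to keep up with the curve:

```r
# Fit a standard method to an accelerating trend and inspect the
# out-of-sample MASE reported by accuracy().
library(forecast)

y     <- ts(exp(seq(0.1, 6, by = 0.1)))   # exp(t) on a grid, 60 points
train <- window(y, end = 48)
test  <- window(y, start = 49)

fit <- auto.arima(train)
accuracy(forecast(fit, h = length(test)), test)["Test set", "MASE"]
```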
So I'd say that you likely can't say a lot about this without looking at the data themselves. They are available on Kaggle. Your best bet is likely to take these 518 series, hold out the last 24 months, fit ARIMA models, calculate MASEs, dig out the ten or twenty series with the worst MASE, get a big pot of coffee, look at these series and try to figure out what it is that makes ARIMA models so bad at forecasting them.
EDIT: another point that appears obvious after the fact but took me five days to see - remember that the denominator of the MASE is the in-sample MAD of the one-step-ahead random walk forecast, whereas the numerator is the average error of the 1-24-step-ahead forecasts. It's not too surprising that forecasts deteriorate with increasing horizons, so this may be another reason for a MASE of 1.38. Note that the seasonal naive forecast was also included in the benchmarks and had an even higher MASE.
Best Answer
Disagreement among these measures is actually a natural thing, as they target different objectives. Suppose you knew the true probability distribution of the random variable of interest (call it $Y$). Then, in order to minimize the MSE, you would state the mean of $Y$ as your forecast. In order to minimize the MAE, however, you would state the median of $Y$, which differs from the mean if the distribution of $Y$ is skewed.
Hence it is easily possible that method A gives better forecasts of the mean, whereas method B is better for the median, which makes the measures disagree. In order to choose an accuracy measure, you should think about which concept (mean vs median vs ...) you're interested in.
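A small simulation sketch of this point (the lognormal target is chosen arbitrarily as a skewed example): forecasting the mean wins on MSE, forecasting the median wins on MAE.

```r
set.seed(1)
y <- rlnorm(1e5)          # skewed target: lognormal(0, 1)
f_mean   <- exp(1/2)      # its true mean
f_median <- 1             # its true median

# Mean forecast has lower MSE; median forecast has lower MAE.
c(mse_mean = mean((y - f_mean)^2),  mse_median = mean((y - f_median)^2))
c(mae_mean = mean(abs(y - f_mean)), mae_median = mean(abs(y - f_median)))
```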
PS: MAPE and MASE seem to target more exotic objectives which are less popular than the mean and median. See http://arxiv.org/pdf/0912.0902.pdf for details on this.