
Forecast Model Evaluation

When a time series model is selected, one common use of the model is to forecast future values.
  • Minimizing within-sample error matters, but it is usually not sufficient: we want to know how well the model forecasts future values. Goodness-of-fit within the domain of the data used to build the model (as with linear regression) is not necessarily a good indicator of the reliability of the forecasts the model generates.
  • Several measures of forecast accuracy are based on one-step-ahead forecast errors, sometimes also referred to as out-of-sample forecast errors.


The one-step-ahead forecast error et(1) is the difference between the actual value yt and y.hat_t(t-1), the forecast of yt made at time t-1.  As each new observation is added to the time series, a new one-step-ahead forecast is created for the next point in the series.
  • et(1) = yt - y.hat_t(t-1)
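The one-step-ahead errors can be sketched in a few lines of Python. This is an illustration, not a real model: the series values are invented, and a naive forecast (y.hat_t(t-1) = y_{t-1}) stands in for whatever fitted time series model would be used in practice, using the convention et(1) = yt - y.hat_t(t-1).

```python
# One-step-ahead forecast errors e_t(1), illustrated with a naive
# forecast (yhat_t(t-1) = y_{t-1}); the series values are invented.

y = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0]

# Each forecast for point t is made at time t-1, so errors start at t = 1.
forecasts = [y[t - 1] for t in range(1, len(y))]       # naive yhat_t(t-1)
errors = [y[t] - forecasts[t - 1] for t in range(1, len(y))]

print(errors)  # -> [6.0, 14.0, -3.0, -8.0, 14.0]
```

Each new observation would extend the list and yield one more one-step-ahead error.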

When fitting a time series model, it may be helpful to split the data into two parts.  The first part would be used to fit the time series model.  The second part would be used to evaluate forecast performance.
Common terms include:  data splitting; cross-validation

There are several measures of forecast accuracy that fall into two categories:
  • Scale-dependent measures
  • Relative forecast error

Scale-dependent measures

ME - Mean Error
  • An estimate of the expected value of the forecast errors
  • ME = (1/n)*sum[et(1)]

MSE - Mean Squared Error
  • Estimates the variance of the one-step-ahead forecast errors
  • MSE = (1/n)*sum[et(1)^2]

MAD - Mean Absolute Deviation
  • Measures the variability of the forecast errors
  • MAD = (1/n)*sum[abs(et(1))]
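The three scale-dependent measures are straightforward to compute once the one-step-ahead errors are in hand. A minimal sketch, using an invented list of errors:

```python
# Scale-dependent accuracy measures computed from a list of
# one-step-ahead forecast errors e_t(1) (values invented for illustration).

errors = [6.0, 14.0, -3.0, -8.0, 14.0]
n = len(errors)

me  = sum(errors) / n                   # ME  - Mean Error
mse = sum(e * e for e in errors) / n    # MSE - Mean Squared Error
mad = sum(abs(e) for e in errors) / n   # MAD - Mean Absolute Deviation

print(me, mse, mad)  # -> 4.6 100.2 9.0
```

Note that ME can be near zero even when individual errors are large (positive and negative errors cancel), which is why MSE or MAD is usually reported alongside it.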

Relative forecast error measures

Relative forecast error (also called "percent forecast error")
  • one-step-ahead relative error ret(1) is based on the errors of the one-step-ahead forecasts
  • ret(1) = 100*[ et(1) / yt ]

MPE - Mean Percent (forecast) Error
  • MPE = (1/n)*sum[ret(1)]

MAPE - Mean Absolute Percent (forecast) Error
  • MAPE = (1/n)*sum[abs(ret(1))]


  • The relative forecast error measures are undefined when the time series contains values equal to zero (division by zero) and can be misleading when values are close to zero.
  • Relative forecast error metrics allow comparisons of forecasting methods across different time series or time periods.
  • The R 'forecast' package provides the accuracy() function, which calculates these metrics.
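The relative measures follow the same pattern, dividing each error by the corresponding actual value. A sketch with invented actuals and errors (all actuals nonzero, per the caveat above):

```python
# Relative (percent) forecast errors re_t(1) and the MPE/MAPE summaries.
# Actuals and errors are invented; actuals must be nonzero, since the
# relative error is undefined when y_t = 0.

actuals = [118.0, 132.0, 129.0, 121.0, 135.0]
errors  = [6.0, 14.0, -3.0, -8.0, 14.0]      # e_t(1)

rel = [100.0 * e / y for e, y in zip(errors, actuals)]   # re_t(1)
n = len(rel)

mpe  = sum(rel) / n                  # MPE  - Mean Percent Error
mape = sum(abs(r) for r in rel) / n  # MAPE - Mean Absolute Percent Error

print(round(mpe, 2), round(mape, 2))
```

Because the errors are scaled by the actual values, these summaries can be compared across series measured in different units.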

Hold-out Sample

When checking a time series model for forecast effectiveness, it is common to split the data into two parts.  The time series model is fit using the first part of the data.  Forecast effectiveness is assessed by applying the time series model to the latter part of the data - the hold-out sample.

For example, if sales data for the past four years were available, a Holt-Winters model might be fit using the first three years, and the model's forecasts for the fourth year then checked against the hold-out sample of actual year-four data.
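The hold-out workflow can be sketched as follows. The data are invented quarterly "sales", and a seasonal-naive forecast (repeat last year's values) stands in for a fitted Holt-Winters model, which would be used in practice:

```python
# Hold-out evaluation sketch: fit on the first part of the series,
# assess forecasts on the hold-out. A seasonal-naive forecast stands in
# for a fitted model (e.g. Holt-Winters); the sales figures are invented.

# Four "years" of quarterly sales (16 points).
sales = [10, 14, 8, 12,  11, 15, 9, 13,  12, 16, 10, 14,  13, 17, 11, 15]
season = 4

train, holdout = sales[:12], sales[12:]      # first 3 years / year 4

# Seasonal-naive: forecast each hold-out quarter with last year's value.
forecasts = train[-season:]

errors = [a - f for a, f in zip(holdout, forecasts)]
mad = sum(abs(e) for e in errors) / len(errors)
print(errors, mad)  # -> [1, 1, 1, 1] 1.0
```

Any of the accuracy measures above can then be computed on the hold-out errors to compare candidate models.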