
# ARIMA modelling in R

## How does auto.arima() work?

The auto.arima() function in R uses a variation of the Hyndman-Khandakar algorithm, which combines unit root tests, minimization of the AICc, and maximum likelihood estimation to obtain an ARIMA model. The algorithm follows these steps.

#### Hyndman-Khandakar algorithm for automatic ARIMA modelling

1. The number of differences d is determined using repeated KPSS tests.

2. The values of p and q are then chosen by minimizing the AICc after differencing the data d times. Rather than considering every possible combination of p and q, the algorithm uses a stepwise search to traverse the model space.

(a) The best model (with smallest AICc) is selected from the following four:

ARIMA(2,d,2),
ARIMA(0,d,0),
ARIMA(1,d,0),
ARIMA(0,d,1).

If d = 0 then the constant c is included; if d ≥ 1 then the constant c is set to zero. This is called the "current model".

(b) Variations on the current model are considered:

• vary p and/or q from the current model by ±1;
• include/exclude c from the current model.

The best model considered so far (either the current model, or one of these variations) becomes the new current model.

(c) Repeat Step 2(b) until no lower AICc can be found.
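The stepwise search in step 2 can be sketched in code. This is an illustrative Python sketch, not the forecast package's implementation: `aicc(p, q)` is a stand-in for whatever routine fits an ARIMA(p, d, q) model to the differenced data and returns its AICc, and the include/exclude-constant step is omitted for brevity.

```python
def stepwise_search(aicc, max_p=5, max_q=5):
    """Sketch of the Hyndman-Khandakar stepwise search over (p, q).

    `aicc(p, q)` is assumed to fit ARIMA(p, d, q) and return its AICc;
    here it is a hypothetical stand-in, not a real model fit.
    The real algorithm also toggles the constant c; that is omitted here.
    """
    # Step 2(a): start from the best of the four simple models.
    start = [(2, 2), (0, 0), (1, 0), (0, 1)]
    current = min(start, key=lambda pq: aicc(*pq))
    best = aicc(*current)
    while True:
        # Step 2(b): vary p and/or q by +/-1 around the current model.
        p, q = current
        neighbours = [(p + dp, q + dq)
                      for dp in (-1, 0, 1) for dq in (-1, 0, 1)
                      if (dp, dq) != (0, 0)
                      and 0 <= p + dp <= max_p and 0 <= q + dq <= max_q]
        cand = min(neighbours, key=lambda pq: aicc(*pq))
        # Step 2(c): stop when no neighbour improves on the best AICc.
        if aicc(*cand) >= best:
            return current
        current, best = cand, aicc(*cand)
```

With a toy score that has a unique minimum at (3, 1), the search walks from the best starting model to that minimum and stops.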

If you want to choose the model yourself, use the Arima() function in R. For example, to fit the ARIMA(0,0,3) model to the US consumption data, the following commands can be used.

```r
fit <- Arima(usconsumption[,1], order=c(0,0,3))
```

There is another function arima() in R which also fits an ARIMA model. However, it does not allow for the constant c unless d=0, and it does not return everything required for the forecast() function. Finally, it does not allow the estimated model to be applied to new data (which is useful for checking forecast accuracy). Consequently, it is recommended that you use Arima() instead.

## Modelling procedure

When fitting an ARIMA model to a set of time series data, the following procedure provides a useful general approach.

1. Plot the data. Identify any unusual observations.
2. If necessary, transform the data (using a Box-Cox transformation) to stabilize the variance.
3. If the data are non-stationary: take first differences of the data until the data are stationary.
4. Examine the ACF/PACF: Is an AR(p) or MA(q) model appropriate?
5. Try your chosen model(s), and use the AICc to search for a better model.
6. Check the residuals from your chosen model by plotting the ACF of the residuals, and doing a portmanteau test of the residuals. If they do not look like white noise, try a modified model.
7. Once the residuals look like white noise, calculate forecasts.

The automated algorithm only takes care of steps 3–5. So even if you use it, you will still need to take care of the other steps yourself.

The process is summarized in Figure 8.10.

## Example 8.2 Seasonally adjusted electrical equipment orders

We will apply this procedure to the seasonally adjusted electrical equipment orders data shown in Figure 8.11 (the seasonally adjusted electrical equipment orders index in the Euro area).
1. The time plot shows some sudden changes, particularly the big drop in 2008/2009. These changes are due to the global economic environment. Otherwise there is nothing unusual about the time plot and there appears to be no need to do any data adjustments.
2. There is no evidence of changing variance, so we will not do a Box-Cox transformation.
3. The data are clearly non-stationary, as the series wanders up and down for long periods. Consequently, we will take a first difference of the data. The differenced data, with their ACF and PACF, are shown in Figure 8.12. These look stationary, and so we will not consider further differences.
4. The PACF shown in Figure 8.12 is suggestive of an AR(3) model. So an initial candidate model is an ARIMA(3,1,0). There are no other obvious candidate models.
5. We fit an ARIMA(3,1,0) model along with variations including ARIMA(4,1,0), ARIMA(2,1,0), ARIMA(3,1,1), etc. Of these, the ARIMA(3,1,1) has a slightly smaller AICc value.
```
> summary(fit)
ARIMA(3,1,1)

Coefficients:
         ar1     ar2     ar3      ma1
      0.0519  0.1191  0.3730  -0.4542
s.e.  0.1840  0.0888  0.0679   0.1993

sigma^2 estimated as 9.532:  log likelihood=-484.08
AIC=978.17   AICc=978.49   BIC=994.4
```
6. The ACF plot of the residuals from the ARIMA(3,1,1) model shows all correlations within the threshold limits, indicating that the residuals are behaving like white noise. A portmanteau test returns a large p-value, also suggesting the residuals are white noise.
```r
Acf(residuals(fit))
Box.test(residuals(fit), lag=24, fitdf=4, type="Ljung")
```
7. Forecasts from the chosen model are shown in Figure 8.13.
```r
plot(forecast(fit))
```

If we had used the automated algorithm instead, we would have obtained exactly the same model in this example.

## Understanding constants in R

A non-seasonal ARIMA model can be written as

$$(1-\phi_1B-\cdots-\phi_pB^p)(1-B)^d y_t = c + (1+\theta_1B+\cdots+\theta_qB^q)e_t \qquad (8.2)$$

or equivalently as

$$(1-\phi_1B-\cdots-\phi_pB^p)(1-B)^d\left(y_t-\mu t^d/d!\right) = (1+\theta_1B+\cdots+\theta_qB^q)e_t, \qquad (8.3)$$

where $c = \mu(1-\phi_1-\cdots-\phi_p)$ and $\mu$ is the mean of $(1-B)^d y_t$. R uses the parametrization of equation (8.3). Thus, the inclusion of a constant in a non-stationary ARIMA model is equivalent to inducing a polynomial trend of order d in the forecast function. (If the constant is omitted, the forecast function includes a polynomial trend of order d−1.) When d=0, we have the special case that $\mu$ is the mean of $y_t$.
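The relationship between the two parametrizations can be checked numerically. A minimal Python sketch, using made-up illustrative values rather than any fitted model:

```python
def constant_from_mean(mu, phis):
    # c = mu * (1 - phi_1 - ... - phi_p), the relation given above
    return mu * (1 - sum(phis))

# e.g. an AR(2) process with mean 10 and phi = (0.5, 0.3) corresponds
# to the constant c = 10 * (1 - 0.8) = 2.0
c = constant_from_mean(10.0, [0.5, 0.3])
```

Note that when all AR coefficients are zero, c reduces to the mean itself, matching the d=0 special case above.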

### arima()

By default, the arima() command in R sets c=μ=0 when d>0 and provides an estimate of μ when d=0. The parameter μ is called the "intercept" in the R output. It will be close to the sample mean of the time series, but usually not identical to it, as the sample mean is not the maximum likelihood estimate when p+q>0. The arima() command has an argument include.mean which only has an effect when d=0 and is TRUE by default. Setting include.mean=FALSE will force μ=0.

### Arima()

The Arima() command from the forecast package provides more flexibility on the inclusion of a constant. It has an argument include.mean which has identical functionality to the corresponding argument for arima(). It also has an argument include.drift which allows μ≠0 when d=1. For d>1, no constant is allowed, as a quadratic or higher order trend is particularly dangerous when forecasting. The parameter μ is called the "drift" in the R output when d=1.

There is also an argument include.constant which, if TRUE, will set include.mean=TRUE if d=0 and include.drift=TRUE when d=1. If include.constant=FALSE, both include.mean and include.drift will be set to FALSE. If include.constant is used, the values of include.mean and include.drift are ignored.

### auto.arima()

The auto.arima() function automates the inclusion of a constant. By default, for d=0 or d=1, a constant will be included if it improves the AIC value; for d>1 the constant is always omitted. If allowdrift=FALSE is specified, then the constant is only allowed when d=0.

# Forecasting

## Point forecasts

Although we have calculated forecasts from the ARIMA models in our examples, we have not yet explained how they are obtained. Point forecasts can be calculated using the following three steps.

1. Expand the ARIMA equation so that $y_t$ is on the left hand side and all other terms are on the right.
2. Rewrite the equation by replacing t by T+h.
3. On the right hand side of the equation, replace future observations by their forecasts, future errors by zero, and past errors by the corresponding residuals.

Beginning with h=1, these steps are then repeated for h = 2, 3, … until all forecasts have been calculated.

The procedure is most easily understood via an example. We will illustrate it using the ARIMA(3,1,1) model fitted in the previous section. The model can be written as follows:

$$(1-\phi_1B-\phi_2B^2-\phi_3B^3)(1-B)y_t = (1+\theta_1B)e_t,$$

where $\phi_1=0.0519$, $\phi_2=0.1191$, $\phi_3=0.3730$ and $\theta_1=-0.4542$.

Then we expand the left hand side to obtain

$$\left[1-(1+\phi_1)B+(\phi_1-\phi_2)B^2+(\phi_2-\phi_3)B^3+\phi_3B^4\right]y_t = (1+\theta_1B)e_t,$$

and applying the backshift operator gives

$$y_t-(1+\phi_1)y_{t-1}+(\phi_1-\phi_2)y_{t-2}+(\phi_2-\phi_3)y_{t-3}+\phi_3y_{t-4} = e_t+\theta_1e_{t-1}.$$

Finally, we move all terms other than ytyt to the right hand side:

$$y_t = (1+\phi_1)y_{t-1}-(\phi_1-\phi_2)y_{t-2}-(\phi_2-\phi_3)y_{t-3}-\phi_3y_{t-4}+e_t+\theta_1e_{t-1}. \qquad (8.2)$$

This completes the first step. While the equation now looks like an ARIMA(4,0,1), it is still the same ARIMA(3,1,1) model we started with. It cannot be considered an ARIMA(4,0,1) because the coefficients do not satisfy the stationarity conditions.
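The non-stationarity claim can be verified numerically: the expanded AR polynomial retains the factor (1−B), so B=1 is always one of its roots, which violates the stationarity conditions. A small Python sketch, using the coefficients reported earlier:

```python
def ar_poly_at(b, phi):
    """Evaluate the expanded AR polynomial of the ARIMA(3,1,1) model at b.

    The expanded operator from the text is
    1 - (1+phi1) b + (phi1-phi2) b^2 + (phi2-phi3) b^3 + phi3 b^4,
    which factors as (1 - phi1 b - phi2 b^2 - phi3 b^3)(1 - b).
    """
    phi1, phi2, phi3 = phi
    return (1 - (1 + phi1) * b + (phi1 - phi2) * b ** 2
            + (phi2 - phi3) * b ** 3 + phi3 * b ** 4)

# b = 1 is a root regardless of the phi values: the (1 - B) factor
# means the process has a unit root.
val = ar_poly_at(1.0, (0.0519, 0.1191, 0.3730))  # 0.0 up to rounding
```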

For the second step, we replace t by T+1 in (8.2):

$$y_{T+1} = (1+\phi_1)y_T-(\phi_1-\phi_2)y_{T-1}-(\phi_2-\phi_3)y_{T-2}-\phi_3y_{T-3}+e_{T+1}+\theta_1e_T.$$

Assuming we have observations up to time T, all values on the right hand side are known except for $e_{T+1}$, which we replace by zero, and $e_T$, which we replace by the last observed residual $\hat{e}_T$:

$$\hat{y}_{T+1|T} = (1+\phi_1)y_T-(\phi_1-\phi_2)y_{T-1}-(\phi_2-\phi_3)y_{T-2}-\phi_3y_{T-3}+\theta_1\hat{e}_T.$$

A forecast of $y_{T+2}$ is obtained by replacing t by T+2 in (8.2). All values on the right hand side will be known at time T except $y_{T+1}$, which we replace by $\hat{y}_{T+1|T}$, and $e_{T+2}$ and $e_{T+1}$, both of which we replace by zero:

$$\hat{y}_{T+2|T} = (1+\phi_1)\hat{y}_{T+1|T}-(\phi_1-\phi_2)y_T-(\phi_2-\phi_3)y_{T-1}-\phi_3y_{T-2}.$$

The process continues in this manner for all future time periods. In this way, any number of point forecasts can be obtained.
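The recursion above translates directly into code. A minimal Python sketch, using the ARIMA(3,1,1) coefficients reported earlier; the observations and residual below are made-up illustrative values, not the electrical equipment data:

```python
def forecast_arima_311(y, e_T, phi, theta1, h):
    """Iterate the point-forecast recursion for the fitted ARIMA(3,1,1).

    `y` holds the last four (or more) observations up to time T, and
    `e_T` is the last observed residual. Following equation (8.2) after
    expansion: future errors are replaced by zero, so theta1 * e_T
    enters only the one-step forecast, and future observations are
    replaced by their forecasts.
    """
    phi1, phi2, phi3 = phi
    coefs = [1 + phi1, -(phi1 - phi2), -(phi2 - phi3), -phi3]
    hist = list(y)
    forecasts = []
    for step in range(h):
        recent = hist[::-1][:4]  # y_T, y_{T-1}, y_{T-2}, y_{T-3}
        yhat = sum(c * v for c, v in zip(coefs, recent))
        if step == 0:
            yhat += theta1 * e_T  # last residual enters only at h = 1
        forecasts.append(yhat)
        hist.append(yhat)  # future observations replaced by forecasts
    return forecasts

# Made-up history [y_{T-3}, ..., y_T] and residual; coefficients from the text
fc = forecast_arima_311([1.0, 2.0, 3.0, 4.0], 0.5,
                        (0.0519, 0.1191, 0.3730), -0.4542, h=3)
```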

## Forecast intervals

The calculation of ARIMA forecast intervals is much more difficult, and the details are largely beyond the scope of this book. We will just give some simple examples.

The first forecast interval is easily calculated. If $\hat{\sigma}$ is the standard deviation of the residuals, then a 95% forecast interval is given by $\hat{y}_{T+1|T} \pm 1.96\hat{\sigma}$. This result is true for all ARIMA models regardless of their parameters and orders.

Multi-step forecast intervals for ARIMA(0,0,q) models are relatively easy to calculate. We can write the model as

$$y_t = e_t + \sum_{i=1}^{q}\theta_i e_{t-i}.$$

Then the estimated forecast variance can be written as

$$v_{T+h|T} = \hat{\sigma}^2\left[1+\sum_{i=1}^{h-1}\theta_i^2\right], \qquad \text{for } h=2,3,\dots,$$

and a 95% forecast interval is given by $\hat{y}_{T+h|T} \pm 1.96\sqrt{v_{T+h|T}}$.
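The interval formula above can be sketched as a short Python function. This is an illustrative sketch of the variance formula, not the forecast package's interval code; coefficients beyond the model order are treated as zero, and the values in the example call are made up:

```python
import math

def ma_forecast_interval(point, sigma, thetas, h):
    """95% forecast interval for an MA(q) model at horizon h.

    Implements v_{T+h|T} = sigma^2 * (1 + sum_{i=1}^{h-1} theta_i^2),
    taking theta_i = 0 beyond the model order q.
    """
    v = sigma ** 2 * (1 + sum(t ** 2 for t in thetas[:h - 1]))
    half = 1.96 * math.sqrt(v)
    return point - half, point + half

# At h = 1 the sum is empty, so this reduces to point +/- 1.96 * sigma,
# matching the one-step result stated above.
lo, hi = ma_forecast_interval(0.0, 2.0, [0.5, 0.2], 1)
```

Beyond horizon q+1 the sum stops growing, which is why MA(q) forecast intervals level off at long horizons.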

In Section 8.4, we showed that an AR(1) model can be written as an MA(∞) model. Using this equivalence, the above result for MA(q) models can also be used to obtain forecast intervals for AR(1) models.

More general results, and other special cases of multi-step forecast intervals for an ARIMA(p,d,q) model, are given in more advanced textbooks such as Brockwell and Davis (1991, Section 9.5).

The forecast intervals for ARIMA models are based on the assumptions that the residuals are uncorrelated and normally distributed. If either of these assumptions does not hold, then the forecast intervals may be incorrect. For this reason, always plot the ACF and histogram of the residuals to check the assumptions before producing forecast intervals.

In general, forecast intervals from ARIMA models will increase as the forecast horizon increases. For stationary models (i.e., with d=0), they will converge, so forecast intervals for long horizons are all essentially the same. For d≥1, the forecast intervals will continue to grow into the future.

As with most forecast interval calculations, ARIMA-based intervals tend to be too narrow. This occurs because only the variation in the errors has been accounted for. There is also variation in the parameter estimates, and in the model order, that has not been included in the calculation. In addition, the calculation assumes that the historical patterns that have been modelled will continue into the forecast period.