
# Seasonal ARIMA models

So far, we have restricted our attention to non-seasonal data and non-seasonal ARIMA models. However, ARIMA models are also capable of modelling a wide range of seasonal data. A seasonal ARIMA model is formed by including additional seasonal terms in the ARIMA models we have seen so far. It is written as follows:

$$\text{ARIMA}\ \underbrace{(p,d,q)}_{\text{non-seasonal part}}\ \underbrace{(P,D,Q)_m}_{\text{seasonal part}}$$

where $m$ = number of periods per season. We use uppercase notation for the seasonal parts of the model, and lowercase notation for the non-seasonal parts of the model.

The seasonal part of the model consists of terms that are very similar to the non-seasonal components of the model, but they involve backshifts of the seasonal period. For example, an ARIMA(1,1,1)(1,1,1)4 model (without a constant) is for quarterly data ($m=4$) and can be written as

$$(1 - \phi_1 B)(1 - \Phi_1 B^4)(1 - B)(1 - B^4)y_t = (1 + \theta_1 B)(1 + \Theta_1 B^4)e_t.$$
The additional seasonal terms are simply multiplied with the non-seasonal terms.
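The effect of this multiplication can be seen by expanding the backshift polynomials numerically. The sketch below uses illustrative coefficients ($\phi_1 = 0.5$, $\Phi_1 = 0.3$, not values from the text) and base R's convolve() to multiply a non-seasonal factor by a seasonal one:

```r
# Polynomial coefficients, in ascending powers of B.
nonseasonal <- c(1, -0.5)            # (1 - 0.5 B): non-seasonal AR(1) factor
seasonal    <- c(1, 0, 0, 0, -0.3)   # (1 - 0.3 B^4): seasonal AR(1) factor, m = 4

# Polynomial multiplication is an "open" convolution of the coefficient vectors.
prod <- round(convolve(nonseasonal, rev(seasonal), type = "open"), 4)
prod
# (1 - 0.5 B)(1 - 0.3 B^4) = 1 - 0.5 B - 0.3 B^4 + 0.15 B^5
```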

## ACF/PACF

The seasonal part of an AR or MA model will be seen in the seasonal lags of the PACF and ACF. For example, an ARIMA(0,0,0)(0,0,1)12 model will show:

• a spike at lag 12 in the ACF but no other significant spikes;
• exponential decay in the seasonal lags of the PACF (i.e., at lags 12, 24, 36, …).

Similarly, an ARIMA(0,0,0)(1,0,0)12 model will show:

• exponential decay in the seasonal lags of the ACF
• a single significant spike at lag 12 in the PACF.
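These signatures are easy to check by simulation. The following sketch (illustrative, not from the text) generates data from a seasonal MA(1) model at lag 12 — i.e., an MA(12) process whose only non-zero coefficient is at lag 12 — and confirms the spike at lag 12 in the ACF:

```r
set.seed(42)
# Seasonal MA(1) at lag 12 with Theta = 0.8.
y <- arima.sim(model = list(ma = c(rep(0, 11), 0.8)), n = 360)

# Sample ACF: index 1 is lag 0, so the lag-12 value is index 13.
r <- acf(y, lag.max = 36, plot = FALSE)$acf
r[13]   # a large spike, near the theoretical 0.8/(1 + 0.8^2) ≈ 0.49
```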

In considering the appropriate seasonal orders for an ARIMA model, restrict attention to the seasonal lags. The modelling procedure is almost the same as for non-seasonal data, except that we need to select seasonal AR and MA terms as well as the non-seasonal components of the model. The process is best illustrated via examples.

## Example 8.3 European quarterly retail trade

We will describe the seasonal ARIMA modelling procedure using quarterly European retail trade data from 1996 to 2011. The data are plotted in Figure 8.14.

Figure 8.14: Quarterly retail trade index in the Euro area (17 countries), 1996–2011, covering wholesale and retail trade, and repair of motor vehicles and motorcycles. (Index: 2005 = 100).
R code
plot(euretail, ylab="Retail index", xlab="Year")

The data are clearly non-stationary, with some seasonality, so we will first take a seasonal difference. The seasonally differenced data are shown in Figure 8.15. These also appear to be non-stationary, and so we take an additional first difference, shown in Figure 8.16.

Figure 8.15: Seasonally differenced European retail trade index.
R code
tsdisplay(diff(euretail,4))
Figure 8.16: Double differenced European retail trade index.
R code
tsdisplay(diff(diff(euretail,4)))

Our aim now is to find an appropriate ARIMA model based on the ACF and PACF shown in Figure 8.16. The significant spike at lag 1 in the ACF suggests a non-seasonal MA(1) component, and the significant spike at lag 4 in the ACF suggests a seasonal MA(1) component. Consequently, we begin with an ARIMA(0,1,1)(0,1,1)4 model, indicating a first difference, a seasonal difference, and non-seasonal and seasonal MA(1) components. The residuals from the fitted model are shown in Figure 8.17. (By analogous logic, we could also have started with an ARIMA(1,1,0)(1,1,0)4 model.)

Figure 8.17: Residuals from the fitted ARIMA(0,1,1)(0,1,1)4 model for the European retail trade index data.
R code
fit <- Arima(euretail, order=c(0,1,1), seasonal=c(0,1,1))
tsdisplay(residuals(fit))

Both the ACF and PACF show significant spikes at lag 2, and almost significant spikes at lag 3, indicating that some additional non-seasonal terms need to be included in the model. The AICc of the ARIMA(0,1,2)(0,1,1)4 model is 74.36, while that of the ARIMA(0,1,3)(0,1,1)4 model is 68.53. We also tried models with AR terms, but none gave a smaller AICc value. Consequently, we choose the ARIMA(0,1,3)(0,1,1)4 model. Its residuals are plotted in Figure 8.18. All the spikes are now within the significance limits, so the residuals appear to be white noise. A Ljung-Box test also shows that the residuals have no remaining autocorrelation.

Figure 8.18: Residuals from the fitted ARIMA(0,1,3)(0,1,1)4 model for the European retail trade index data.
R code
fit3 <- Arima(euretail, order=c(0,1,3), seasonal=c(0,1,1))
res <- residuals(fit3)
tsdisplay(res)
Box.test(res, lag=16, fitdf=4, type="Ljung")
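In the Box.test() call above, fitdf=4 is used because the ARIMA(0,1,3)(0,1,1)4 model has four estimated coefficients, so the chi-squared statistic loses four degrees of freedom. A minimal base-R sketch of the same idea on simulated data (illustrative, not the retail series):

```r
set.seed(123)
# Fit the correct AR(1) model to simulated AR(1) data using base R's arima().
x <- arima.sim(model = list(ar = 0.7), n = 200)
fit <- arima(x, order = c(1, 0, 0))

# One estimated ARMA coefficient => fitdf = 1,
# so the test has lag - fitdf = 9 degrees of freedom.
bt <- Box.test(residuals(fit), lag = 10, fitdf = 1, type = "Ljung")
bt$parameter   # df = 9
```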

So we now have a seasonal ARIMA model that passes the required checks and is ready for forecasting. Forecasts from the model for the next three years are shown in Figure 8.19. Notice how the forecasts follow the recent trend in the data (this occurs because of the double differencing). The large and rapidly increasing prediction intervals show that the retail trade index could start increasing or decreasing at any time — while the point forecasts trend downwards, the prediction intervals allow for the data to trend upwards during the forecast period.

Figure 8.19: Forecasts of the European retail trade index data using the ARIMA(0,1,3)(0,1,1)4 model. 80% and 95% prediction intervals are shown.
R code
plot(forecast(fit3, h=12))

We could have used auto.arima() to do most of this work for us. It would have given the following result.

R output
> auto.arima(euretail)
ARIMA(1,1,1)(0,1,1)[4]

Coefficients:
ar1      ma1     sma1
0.8828  -0.5208  -0.9704
s.e.  0.1424   0.1755   0.6792

sigma^2 estimated as 0.1411:  log likelihood=-30.19
AIC=68.37   AICc=69.11   BIC=76.68

Notice that it has selected a different model (with a larger AICc value). auto.arima() takes some short-cuts in order to speed up the computation and will not always give the best model. You can turn the short-cuts off and then it will sometimes return a different model.

R output
> auto.arima(euretail, stepwise=FALSE, approximation=FALSE)
ARIMA(0,1,3)(0,1,1)[4]

Coefficients:
ma1     ma2     ma3     sma1
0.2625  0.3697  0.4194  -0.6615
s.e.  0.1239  0.1260  0.1296   0.1555

sigma^2 estimated as 0.1451:  log likelihood=-28.7
AIC=67.4   AICc=68.53   BIC=77.78

This time it returned the same model we had identified.

## Example 8.4 Corticosteroid drug sales in Australia

Our second example is more difficult. We will try to forecast monthly corticosteroid drug sales in Australia. These are known as H02 drugs under the Anatomical Therapeutic Chemical classification scheme.

Figure 8.20: Corticosteroid drug sales in Australia (in millions of scripts per month). Logged data shown in bottom panel.
R code
lh02 <- log(h02)
par(mfrow=c(2,1))
plot(h02, ylab="H02 sales (million scripts)", xlab="Year")
plot(lh02, ylab="Log H02 sales", xlab="Year")

Data from July 1991 to June 2008 are plotted in Figure 8.20. There is a small increase in the variance with the level, and so we take logarithms to stabilize the variance.

The data are strongly seasonal and obviously non-stationary, and so seasonal differencing will be used. The seasonally differenced data are shown in Figure 8.21. It is not clear at this point whether we should do another difference or not. We decide not to, but the choice is not obvious.

The last few observations appear to be different (more variable) from the earlier data. This may be due to the fact that data are sometimes revised as earlier sales are reported late.

Figure 8.21: Seasonally differenced corticosteroid drug sales in Australia (in millions of scripts per month).
R code
tsdisplay(diff(lh02,12), main="Seasonally differenced H02 scripts", xlab="Year")

In the plots of the seasonally differenced data, there are spikes in the PACF at lags 12 and 24, but nothing at seasonal lags in the ACF. This may be suggestive of a seasonal AR(2) term. In the non-seasonal lags, there are three significant spikes in the PACF suggesting a possible AR(3) term. The pattern in the ACF is not indicative of any simple model.

Consequently, this initial analysis suggests that a possible model for these data is an ARIMA(3,0,0)(2,1,0)12. We fit this model, along with some variations on it, and compute their AICc values which are shown in the following table.

| Model | AICc |
| --- | --- |
| ARIMA(3,0,0)(2,1,0)12 | -475.12 |
| ARIMA(3,0,1)(2,1,0)12 | -476.31 |
| ARIMA(3,0,2)(2,1,0)12 | -474.88 |
| ARIMA(3,0,1)(1,1,0)12 | -463.40 |
| ARIMA(3,0,1)(0,1,1)12 | -483.67 |
| ARIMA(3,0,1)(0,1,2)12 | -485.48 |
| ARIMA(3,0,1)(1,1,1)12 | -484.25 |

Of these models, the best is the ARIMA(3,0,1)(0,1,2)12 model (i.e., it has the smallest AICc value).
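Base R's arima() reports AIC but not AICc; the correction can be computed by hand as AICc = AIC + 2k(k+1)/(n − k − 1), where k is the number of estimated parameters. A sketch on simulated data (illustrative, not the H02 series):

```r
set.seed(7)
x <- arima.sim(model = list(ar = 0.5), n = 120)

# AICc = AIC + 2k(k+1)/(n - k - 1); k counts the coefficients plus sigma^2.
aicc <- function(fit) {
  k <- length(fit$coef) + 1
  n <- fit$nobs
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

# Compare candidate AR orders; the smallest AICc wins.
fits <- lapply(0:2, function(p) arima(x, order = c(p, 0, 0)))
sapply(fits, aicc)
```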

R output
> fit <- Arima(h02, order=c(3,0,1), seasonal=c(0,1,2), lambda=0)

ARIMA(3,0,1)(0,1,2)[12]
Box Cox transformation: lambda= 0

Coefficients:
ar1     ar2     ar3     ma1     sma1     sma2
-0.1603  0.5481  0.5678  0.3827  -0.5222  -0.1768
s.e.   0.1636  0.0878  0.0942  0.1895   0.0861   0.0872

sigma^2 estimated as 0.004145:  log likelihood=250.04
AIC=-486.08   AICc=-485.48   BIC=-463.28

The residuals from this model are shown in Figure 8.22. There are significant spikes in both the ACF and PACF, and the model fails a Ljung-Box test. The model can still be used for forecasting, but the prediction intervals may not be accurate due to the correlated residuals.

Figure 8.22: Residuals from the ARIMA(3,0,1)(0,1,2)12 model applied to the H02 monthly script sales data.
R code
tsdisplay(residuals(fit))
Box.test(residuals(fit), lag=36, fitdf=6, type="Ljung")

Next we will try using the automatic ARIMA algorithm. Running auto.arima() with its arguments left at their default values led to an ARIMA(2,1,3)(0,1,1)12 model. However, that model still fails the Ljung-Box test. Sometimes it is just not possible to find a model that passes all the tests.

Finally, we tried running auto.arima() with the differencing specified to be d=0 and D=1, and allowing larger models than usual. This led to an ARIMA(4,0,3)(0,1,1)12 model, which did pass all the tests.

R code
fit <- auto.arima(h02, lambda=0, d=0, D=1, max.order=9,
stepwise=FALSE, approximation=FALSE)
tsdisplay(residuals(fit))
Box.test(residuals(fit), lag=36, fitdf=8, type="Ljung")

### Test set evaluation:

We will compare some of the models fitted so far using a test set consisting of the last two years of data. Thus, we fit the models using data from July 1991 to June 2006, and forecast the script sales for July 2006 – June 2008. The results are summarised in the following table.

| Model | RMSE |
| --- | --- |
| ARIMA(3,0,0)(2,1,0)12 | 0.0661 |
| ARIMA(3,0,1)(2,1,0)12 | 0.0646 |
| ARIMA(3,0,2)(2,1,0)12 | 0.0645 |
| ARIMA(3,0,1)(1,1,0)12 | 0.0679 |
| ARIMA(3,0,1)(0,1,1)12 | 0.0644 |
| ARIMA(3,0,1)(0,1,2)12 | 0.0622 |
| ARIMA(3,0,1)(1,1,1)12 | 0.0630 |
| ARIMA(4,0,3)(0,1,1)12 | 0.0648 |
| ARIMA(3,0,3)(0,1,1)12 | 0.0640 |
| ARIMA(4,0,2)(0,1,1)12 | 0.0648 |
| ARIMA(3,0,2)(0,1,1)12 | 0.0644 |
| ARIMA(2,1,3)(0,1,1)12 | 0.0634 |
| ARIMA(2,1,4)(0,1,1)12 | 0.0632 |
| ARIMA(2,1,5)(0,1,1)12 | 0.0640 |
R code
getrmse <- function(x, h, ...)
{
  # Fit a model to all but the last h observations,
  # forecast the held-out period, and return the test-set RMSE.
  train.end <- time(x)[length(x)-h]
  test.start <- time(x)[length(x)-h+1]
  train <- window(x, end=train.end)
  test <- window(x, start=test.start)
  fit <- Arima(train, ...)
  fc <- forecast(fit, h=h)
  return(accuracy(fc, test)[2, "RMSE"])
}

getrmse(h02,h=24,order=c(3,0,0),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,2),seasonal=c(2,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(1,1,0),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(0,1,2),lambda=0)
getrmse(h02,h=24,order=c(3,0,1),seasonal=c(1,1,1),lambda=0)
getrmse(h02,h=24,order=c(4,0,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(4,0,2),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(3,0,2),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,3),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,4),seasonal=c(0,1,1),lambda=0)
getrmse(h02,h=24,order=c(2,1,5),seasonal=c(0,1,1),lambda=0)

The models that have the lowest AICc values tend to give slightly better results than the other models, but there is not a large difference. Also, the only model that passed the residual tests did not give the best out-of-sample RMSE values.

When models are compared using AICc values, it is important that all models have the same orders of differencing. However, when comparing models using a test set, it does not matter how the forecasts were produced; the comparisons are always valid. Consequently, in the table above, we can include some models with only seasonal differencing and some models with both first and seasonal differencing. But in the earlier table containing AICc values, we compared only models with seasonal differencing.

None of the models considered here pass all the residual tests. In practice, we would normally use the best model we could find, even if it did not pass all tests.

Forecasts from the ARIMA(3,0,1)(0,1,2)12 model (which has the lowest RMSE value on the test set, and the best AICc value amongst models with only seasonal differencing and fewer than six parameters) are shown in the figure below.

Figure 8.23: Forecasts from the ARIMA(3,0,1)(0,1,2)12 model applied to the H02 monthly script sales data.
R code
fit <- Arima(h02, order=c(3,0,1), seasonal=c(0,1,2), lambda=0)
plot(forecast(fit), ylab="H02 sales (million scripts)", xlab="Year")

# ARIMA vs ETS

It is a common myth that ARIMA models are more general than exponential smoothing. While linear exponential smoothing models are all special cases of ARIMA models, the non-linear exponential smoothing models have no equivalent ARIMA counterparts. There are also many ARIMA models that have no exponential smoothing counterparts. In particular, every ETS model is non-stationary, while ARIMA models can be stationary.

The ETS models with seasonality or non-damped trend or both have two unit roots (i.e., they need two levels of differencing to make them stationary). All other ETS models have one unit root (they need one level of differencing to make them stationary).

The following table gives some equivalence relationships for the two classes of models.

| ETS model | ARIMA model | Parameters |
| --- | --- | --- |
| ETS(A,N,N) | ARIMA(0,1,1) | $\theta_1 = \alpha - 1$ |
| ETS(A,A,N) | ARIMA(0,2,2) | $\theta_1 = \alpha + \beta - 2$, $\theta_2 = 1 - \alpha$ |
| ETS(A,Ad,N) | ARIMA(1,1,2) | $\phi_1 = \phi$, $\theta_1 = \alpha + \phi\beta - 1 - \phi$, $\theta_2 = (1-\alpha)\phi$ |
| ETS(A,N,A) | ARIMA(0,0,m)(0,1,0)m | |
| ETS(A,A,A) | ARIMA(0,1,m+1)(0,1,0)m | |

For the seasonal models, there are a large number of restrictions on the ARIMA parameters.
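The first row of the table can be checked numerically: simple exponential smoothing with parameter $\alpha$ produces exactly the same one-step forecasts as an ARIMA(0,1,1) model with $\theta_1 = \alpha - 1$. A self-contained sketch (illustrative series and parameter values, not from the text):

```r
set.seed(1)
y <- cumsum(rnorm(50))   # an arbitrary random-walk series
alpha <- 0.3
theta <- alpha - 1       # ARIMA(0,1,1) coefficient implied by ETS(A,N,N)

# Simple exponential smoothing: l[t] = alpha*y[t] + (1-alpha)*l[t-1]
l <- numeric(50); l[1] <- y[1]
for (t in 2:50) l[t] <- alpha * y[t] + (1 - alpha) * l[t-1]

# ARIMA(0,1,1) one-step forecasts: f[t] = y[t] + theta*e[t],
# with forecast error e[t] = y[t] - f[t-1].
f <- numeric(50); f[1] <- y[1]
for (t in 2:50) {
  e <- y[t] - f[t-1]
  f[t] <- y[t] + theta * e
}

max(abs(l - f))   # essentially zero: the two recursions coincide
```

Substituting $\theta = \alpha - 1$ into the ARIMA recursion gives $f_t = \alpha y_t + (1-\alpha) f_{t-1}$, which is the SES recursion, so the agreement is exact and not an artefact of the simulated series.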

# Exercises

Figure 8.24: Left: ACF for a white noise series of 36 numbers. Middle: ACF for a white noise series of 360 numbers. Right: ACF for a white noise series of 1,000 numbers.
1. Figure 8.24 shows the ACFs for 36 random numbers, 360 random numbers and for 1,000 random numbers.
1. Explain the differences among these figures. Do they all indicate the data are white noise?
2. Why are the critical values at different distances from the mean of zero? Why are the autocorrelations different in each figure when they each refer to white noise?
2. A classic example of a non-stationary series is the daily closing IBM stock prices (data set ibmclose). Use R to plot the daily closing prices for IBM stock and the ACF and PACF. Explain how each plot shows the series is non-stationary and should be differenced.
3. For the following series, find an appropriate Box-Cox transformation and order of differencing in order to obtain stationary data.
1. usnetelec
2. usgdp
3. mcopper
4. enplanements
5. visitors
4. For the enplanements data, write down the differences you chose above using backshift operator notation.
5. Use R to simulate and plot some data from simple ARIMA models.
1. Use the following R code to generate data from an AR(1) model with $\phi_1 = 0.6$ and $\sigma^2 = 1$. The process starts with $y_0 = 0$.
R code
y <- ts(numeric(100))
e <- rnorm(100)
for(i in 2:100)
  y[i] <- 0.6*y[i-1] + e[i]
2. Produce a time plot for the series. How does the plot change as you change $\phi_1$?
3. Write your own code to generate data from an MA(1) model with $\theta_1 = 0.6$ and $\sigma^2 = 1$.
4. Produce a time plot for the series. How does the plot change as you change $\theta_1$?
5. Generate data from an ARMA(1,1) model with $\phi_1 = 0.6$, $\theta_1 = 0.6$ and $\sigma^2 = 1$.
6. Generate data from an AR(2) model with $\phi_1 = -0.8$ and $\phi_2 = 0.3$ and $\sigma^2 = 1$. (Note that these parameters will give a non-stationary series.)
7. Graph the latter two series and compare them.
6. Consider the number of women murdered each year (per 100,000 standard population) in the United States (data set wmurders).
1. By studying appropriate graphs of the series in R, find an appropriate ARIMA($p,d,q$) model for these data.
2. Should you include a constant in the model? Explain.
3. Write this model in terms of the backshift operator.
4. Fit the model using R and examine the residuals. Is the model satisfactory?
5. Forecast three times ahead. Check your forecasts by hand to make sure you know how they have been calculated.
6. Create a plot of the series with forecasts and prediction intervals for the next three periods shown.
7. Does auto.arima give the same model you have chosen? If not, which model do you think is better?
7. Consider the quarterly number of international tourists to Australia for the period 1999–2010. (Data set austourists.)
1. Describe the time plot.
2. What can you learn from the ACF graph?
3. What can you learn from the PACF graph?
4. Produce plots of the seasonally differenced data $(1-B^4)Y_t$. What model do these graphs suggest?
5. Does auto.arima give the same model that you chose? If not, which model do you think is better?
6. Write the model in terms of the backshift operator, and then without using the backshift operator.
8. Consider the total net generation of electricity (in billion kilowatt hours) by the U.S. electric industry (monthly for the period 1985–1996). (Data set usmelec.) In general there are two peaks per year: in mid-summer and mid-winter.
1. Examine the 12-month moving average of this series to see what kind of trend is involved.
2. Do the data need transforming? If so, find a suitable transformation.
3. Are the data stationary? If not, find an appropriate differencing which yields stationary data.
4. Identify a couple of ARIMA models that might be useful in describing the time series. Which of your models is the best according to their AIC values?
5. Estimate the parameters of your best model and do diagnostic testing on the residuals. Do the residuals resemble white noise? If not, try to find another ARIMA model which fits better.
6. Forecast the next 15 years of generation of electricity by the U.S. electric industry. Get the latest figures from http://data.is/zgRWCO to check on the accuracy of your forecasts.
7. How many years of forecasts do you think are sufficiently accurate to be usable?
9. For the mcopper data:
1. if necessary, find a suitable Box-Cox transformation for the data;
2. fit a suitable ARIMA model to the transformed data using auto.arima();
3. try some other plausible models by experimenting with the orders chosen;
4. choose what you think is the best model and check the residual diagnostics;
5. produce forecasts of your fitted model. Do the forecasts look reasonable?
6. compare the results with what you would obtain using ets() (with no transformation).
10. Choose one of the following seasonal time series: condmilk, hsales, uselec.
1. Do the data need transforming? If so, find a suitable transformation.
2. Are the data stationary? If not, find an appropriate differencing which yields stationary data.
3. Identify a couple of ARIMA models that might be useful in describing the time series. Which of your models is the best according to their AIC values?
4. Estimate the parameters of your best model and do diagnostic testing on the residuals. Do the residuals resemble white noise? If not, try to find another ARIMA model which fits better.
5. Forecast the next 24 months of data using your preferred model.
6. Compare the forecasts obtained using ets().
11. For the same time series you used in exercise Q10, try using a non-seasonal model applied to the seasonally adjusted data obtained from STL. The stlf() function will make the calculations easy (with method="arima"). Compare the forecasts with those obtained in exercise Q10. Which do you think is the best approach?