
Autoregressive Integrated Moving Average Model

An ARIMA model is a class of statistical models for analyzing and forecasting time series data.

ARIMA is an acronym that stands for AutoRegressive Integrated Moving Average. It is a generalization of the simpler AutoRegressive Moving Average and adds the notion of integration.

This acronym is descriptive, capturing the key aspects of the model itself. Briefly, they are:

• AR: Autoregression. A model that uses the dependent relationship between an observation and some number of lagged observations.
• I: Integrated. The use of differencing of raw observations (i.e. subtracting an observation from an observation at the previous time step) in order to make the time series stationary.
• MA: Moving Average. A model that uses the dependency between an observation and residual errors from a moving average model applied to lagged observations.

Each of these components is explicitly specified in the model as a parameter.

A standard notation is used of ARIMA(p,d,q) where the parameters are substituted with integer values to quickly indicate the specific ARIMA model being used.

The parameters of the ARIMA model are defined as follows:

• p: The number of lag observations included in the model, also called the lag order.
• d: The number of times that the raw observations are differenced, also called the degree of differencing.
• q: The size of the moving average window, also called the order of moving average.
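As a rough illustration (not the estimation procedure itself), a one-step forecast from a fitted model combines an autoregressive term over the last p observations with a moving-average term over the last q residual errors. The function below is a hypothetical sketch; it assumes the d rounds of differencing have already been applied by the caller:

```python
def arima_one_step(history, residuals, ar_coefs, ma_coefs, intercept=0.0):
    """One-step forecast on an already-differenced series (a sketch).

    history:   past observations, oldest first
    residuals: past one-step forecast errors, oldest first
    ar_coefs:  phi_1..phi_p, coefficient for lag 1 first
    ma_coefs:  theta_1..theta_q, coefficient for lag 1 first
    """
    p, q = len(ar_coefs), len(ma_coefs)
    # AR part: weighted sum of the p most recent observations
    ar_part = sum(phi * y for phi, y in zip(ar_coefs, reversed(history[-p:])))
    # MA part: weighted sum of the q most recent residual errors
    ma_part = sum(theta * e for theta, e in zip(ma_coefs, reversed(residuals[-q:])))
    return intercept + ar_part + ma_part
```

In a real library the coefficients are estimated from data; here they are toy inputs purely to show how p and q control how far back the model looks.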

Box-Jenkins Method

The Box-Jenkins method was proposed by George Box and Gwilym Jenkins in their seminal 1970 textbook Time Series Analysis: Forecasting and Control.

The approach starts with the assumption that the process that generated the time series can be approximated using an ARMA model if it is stationary or an ARIMA model if it is non-stationary.

The 2016 5th edition of the textbook (Part Two, page 177) refers to the process as stochastic model building, an iterative approach that consists of the following three steps:

1. Identification. Use the data and all related information to help select a sub-class of model that may best summarize the data.
2. Estimation. Use the data to train the parameters of the model (i.e. the coefficients).
3. Diagnostic Checking. Evaluate the fitted model in the context of the available data and check for areas where the model may be improved.

It is an iterative process, so that as new information is gained during diagnostics, you can circle back to step 1 and incorporate that into new model classes.

Let’s take a look at these steps in more detail.

1. Identification

The identification step is further broken down into:

1. Assess whether the time series is stationary, and if not, how many differences are required to make it stationary.
2. Identify the parameters of an ARMA model for the data.

1.1 Differencing

Below are some tips during identification.

• Unit Root Tests. Use unit root statistical tests on the time series to determine whether or not it is stationary. Repeat after each round of differencing.
• Avoid over differencing. Differencing the time series more than is required can result in the addition of extra serial correlation and additional complexity.
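Differencing itself is simple to implement; the sketch below applies it repeatedly for higher degrees (for the stationarity check, a unit root test such as adfuller from statsmodels.tsa.stattools can be run after each round):

```python
def difference(series, order=1):
    """Difference a series `order` times (the 'I' in ARIMA)."""
    for _ in range(order):
        series = [series[i] - series[i - 1] for i in range(1, len(series))]
    return series

# each round of differencing shortens the series by one observation
print(difference([1, 4, 9, 16, 25]))           # -> [3, 5, 7, 9]
print(difference([1, 4, 9, 16, 25], order=2))  # -> [2, 2, 2]
```

Note how the quadratic trend in the input becomes a constant series after two differences: a trend of polynomial degree d is removed by d rounds of differencing.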

1.2 Configuring AR and MA

Two diagnostic plots can be used to help choose the p and q parameters of an ARMA or ARIMA model. They are:

• Autocorrelation Function (ACF). The plot summarizes the correlation of an observation with lag values. The x-axis shows the lag and the y-axis shows the correlation coefficient between -1 and 1 for negative and positive correlation.
• Partial Autocorrelation Function (PACF). The plot summarizes the correlation of an observation with lag values that is not accounted for by prior lagged observations.

Both plots are drawn as bar charts showing the 95% and 99% confidence intervals as horizontal lines. Bars that cross these confidence intervals are therefore more significant and worth noting.

Some useful patterns you may observe on these plots are:

• The model is AR if the ACF trails off after a lag and has a hard cut-off in the PACF after a lag. This lag is taken as the value for p.
• The model is MA if the PACF trails off after a lag and has a hard cut-off in the ACF after the lag. This lag value is taken as the value for q.
• The model is a mix of AR and MA if both the ACF and PACF trail off.
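In practice these plots come from a library (e.g. plot_acf and plot_pacf in statsmodels.graphics.tsaplots), but the quantity behind the ACF plot is just the sample autocorrelation at each lag. A minimal pure-Python sketch:

```python
def sample_acf(series, nlags):
    """Sample autocorrelation for lags 0..nlags (what an ACF plot draws)."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    denom = sum(v * v for v in x)
    return [1.0] + [
        sum(x[i] * x[i + k] for i in range(n - k)) / denom
        for k in range(1, nlags + 1)
    ]

# a strictly alternating series is strongly negatively correlated at lag 1
# and positively correlated at lag 2
print(sample_acf([1, -1] * 10, 2))  # -> [1.0, -0.95, 0.9]
```

The PACF is more involved (each lag's correlation is adjusted for the shorter lags, typically via the Durbin-Levinson recursion), which is a good reason to rely on the library plots for both.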

2. Estimation

Estimation involves using numerical methods to minimize a loss or error term.

We will not go into the details of estimating model parameters as these details are handled by the chosen library or tool.

I would recommend referring to a textbook for a deeper understanding of the optimization problem to be solved by ARMA and ARIMA models and optimization methods like Limited-memory BFGS used to solve it.

3. Diagnostic Checking

The idea of diagnostic checking is to look for evidence that the model is not a good fit for the data.

Two useful areas to investigate diagnostics are:

1. Overfitting
2. Residual Errors.

3.1 Overfitting

The first check is whether the model overfits the data. Generally, this means that the model is more complex than it needs to be and captures random noise in the training data.

This is a problem for time series forecasting because it negatively impacts the ability of the model to generalize, resulting in poor forecast performance on out of sample data.

Careful attention must be paid to both in-sample and out-of-sample performance and this requires the careful design of a robust test harness for evaluating models.

3.2 Residual Errors

Forecast residuals provide a great opportunity for diagnostics.

A review of the distribution of errors can help tease out bias in the model. The errors from an ideal model would resemble white noise, that is, draws from a Gaussian distribution with a mean of zero and constant variance.

For this, you may use density plots, histograms, and Q-Q plots that compare the distribution of errors to the expected distribution. A non-Gaussian distribution may suggest an opportunity for data pre-processing. A skew in the distribution or a non-zero mean may suggest a bias in forecasts that may be corrected.

Additionally, an ideal model would leave no temporal structure in the time series of forecast residuals. These can be checked by creating ACF and PACF plots of the residual error time series.

The presence of serial correlation in the residual errors suggests further opportunity for using this information in the model.
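One common formal check for leftover serial correlation is the Ljung-Box statistic, which aggregates the squared residual autocorrelations across a set of lags; larger values mean more remaining structure. A self-contained sketch of the Q statistic (in practice it is compared against a chi-squared threshold):

```python
def ljung_box_q(residuals, nlags):
    """Ljung-Box Q statistic: n(n+2) * sum over k of rho_k^2 / (n - k)."""
    n = len(residuals)
    mean = sum(residuals) / n
    x = [v - mean for v in residuals]
    denom = sum(v * v for v in x)
    q = 0.0
    for k in range(1, nlags + 1):
        # sample autocorrelation of the residuals at lag k
        rho_k = sum(x[i] * x[i + k] for i in range(n - k)) / denom
        q += rho_k * rho_k / (n - k)
    return n * (n + 2) * q
```

A ready-made version, including the p-values, is available as acorr_ljungbox in statsmodels.stats.diagnostic.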

The definitive resource on the topic is Time Series Analysis: Forecasting and Control. I would recommend the 2016 5th edition, specifically Part Two and Chapters 6-10.


The ARIMA model explicitly caters to a suite of standard structures in time series data, and as such provides a simple yet powerful method for making skillful time series forecasts.

A linear regression model is constructed including the specified number and type of terms, and the data is prepared by a degree of differencing in order to make it stationary, i.e. to remove trend and seasonal structures that negatively affect the regression model.

A value of 0 can be used for a parameter, which indicates to not use that element of the model. This way, the ARIMA model can be configured to perform the function of an ARMA model, and even a simple AR, I, or MA model.
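For example, ARIMA(0,1,0) has no AR or MA terms at all; with a constant of zero it reduces to a random walk, whose one-step forecast is simply the previous observation (the classic persistence baseline):

```python
def persistence_forecast(history):
    """One-step forecast of a zero-constant ARIMA(0,1,0): a random walk."""
    return history[-1]

print(persistence_forecast([266.0, 145.9, 183.1]))  # -> 183.1
```

Any configured ARIMA model should at minimum beat this persistence forecast to justify its extra complexity.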

Adopting an ARIMA model for a time series assumes that the underlying process that generated the observations is an ARIMA process. This may seem obvious, but helps to motivate the need to confirm the assumptions of the model in the raw observations and in the residual errors of forecasts from the model.

Next, let’s take a look at how we can use the ARIMA model in Python. We will start with loading a simple univariate time series.

Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3 year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

Download the dataset and place it in your current working directory with the filename “shampoo-sales.csv”.

Below is an example of loading the Shampoo Sales dataset with Pandas with a custom function to parse the date-time field. The dataset is baselined in an arbitrary year, in this case 1900.

from pandas import read_csv
from datetime import datetime
from matplotlib import pyplot

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
print(series.head())
series.plot()
pyplot.show()

Running the example prints the first 5 rows of the dataset.

Month
1901-01-01    266.0
1901-02-01    145.9
1901-03-01    183.1
1901-04-01    119.3
1901-05-01    180.3
Name: Sales, dtype: float64

The data is also plotted as a time series with the month along the x-axis and sales figures on the y-axis.

(Figure: Shampoo Sales Dataset Plot)

We can see that the Shampoo Sales dataset has a clear trend.

This suggests that the time series is not stationary and will require differencing to make it stationary, at least a difference order of 1.

Let’s also take a quick look at an autocorrelation plot of the time series. This is also built-in to Pandas. The example below plots the autocorrelation for a large number of lags in the time series.

from pandas import read_csv
from datetime import datetime
from matplotlib import pyplot
from pandas.plotting import autocorrelation_plot

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
autocorrelation_plot(series)
pyplot.show()

Running the example, we can see that there is a positive correlation with the first 10-to-12 lags that is perhaps significant for the first 5 lags.

A good starting point for the AR parameter of the model may be 5.

(Figure: Autocorrelation Plot of Shampoo Sales Data)

ARIMA with Python

The statsmodels library provides the capability to fit an ARIMA model.

An ARIMA model can be created using the statsmodels library as follows:

1. Define the model by calling ARIMA() and passing in the p, d, and q parameters.
2. The model is prepared on the training data by calling the fit() function.
3. Predictions can be made by calling the predict() function and specifying the index of the time or times to be predicted.

Let’s start off with something simple. We will fit an ARIMA model to the entire Shampoo Sales dataset and review the residual errors.

First, we fit an ARIMA(5,1,0) model. This sets the lag value to 5 for autoregression, uses a difference order of 1 to make the time series stationary, and uses a moving average model of 0.

When fitting the model, a lot of debug information is provided about the fit of the linear regression model. We can turn this off by setting the disp argument to 0.

from pandas import read_csv
from datetime import datetime
from pandas import DataFrame
from statsmodels.tsa.arima_model import ARIMA
from matplotlib import pyplot

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# fit model
model = ARIMA(series, order=(5,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
# plot residual errors
residuals = DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()
residuals.plot(kind='kde')
pyplot.show()
print(residuals.describe())

Running the example prints a summary of the fit model. This summarizes the coefficient values used as well as the skill of the fit on the in-sample observations.

                             ARIMA Model Results
==============================================================================
Dep. Variable:                D.Sales   No. Observations:                   35
Model:                 ARIMA(5, 1, 0)   Log Likelihood                -196.170
Method:                       css-mle   S.D. of innovations             64.241
Date:                Mon, 12 Dec 2016   AIC                            406.340
Time:                        11:09:13   BIC                            417.227
Sample:                    02-01-1901   HQIC                           410.098
                         - 12-01-1903
=================================================================================
                    coef    std err          z      P>|z|      [95.0% Conf. Int.]
---------------------------------------------------------------------------------
const            12.0649      3.652      3.304      0.003         4.908    19.222
ar.L1.D.Sales    -1.1082      0.183     -6.063      0.000        -1.466    -0.750
ar.L2.D.Sales    -0.6203      0.282     -2.203      0.036        -1.172    -0.068
ar.L3.D.Sales    -0.3606      0.295     -1.222      0.231        -0.939     0.218
ar.L4.D.Sales    -0.1252      0.280     -0.447      0.658        -0.674     0.424
ar.L5.D.Sales     0.1289      0.191      0.673      0.506        -0.246     0.504
                                    Roots
=============================================================================
                 Real           Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1           -1.0617           -0.5064j            1.1763           -0.4292
AR.2           -1.0617           +0.5064j            1.1763            0.4292
AR.3            0.0816           -1.3804j            1.3828           -0.2406
AR.4            0.0816           +1.3804j            1.3828            0.2406
AR.5            2.9315           -0.0000j            2.9315           -0.0000
-----------------------------------------------------------------------------

First, we get a line plot of the residual errors, suggesting that there may still be some trend information not captured by the model.

(Figure: ARMA Fit Residual Error Line Plot)

Next, we get a density plot of the residual error values, suggesting the errors are Gaussian, but may not be centered on zero.

(Figure: ARMA Fit Residual Error Density Plot)

The distribution of the residual errors is displayed. The results show that indeed there is a bias in the prediction (a non-zero mean in the residuals).

count     35.000000
mean      -5.495213
std       68.132882
min     -133.296597
25%      -42.477935
50%       -7.186584
75%       24.748357
max      133.237980

Note that although above we used the entire dataset for time series analysis, ideally we would perform this analysis on just the training dataset when developing a predictive model.

Next, let’s look at how we can use the ARIMA model to make forecasts.

Rolling Forecast ARIMA Model

The ARIMA model can be used to forecast future time steps.

We can use the predict() function on the ARIMAResults object to make predictions. It accepts the index of the time steps to make predictions as arguments. These indexes are relative to the start of the training dataset used to make predictions.

If we used 100 observations in the training dataset to fit the model, then the index of the next time step for making a prediction would be specified to the prediction function as start=101, end=101. This would return an array with one element containing the prediction.

We also would prefer the forecasted values to be in the original scale, in case we performed any differencing (d>0 when configuring the model). This can be specified by setting the typ argument to the value 'levels', i.e. typ='levels'.

Alternately, we can avoid all of these specifications by using the forecast() function, which performs a one-step forecast using the model.

We can split the training dataset into train and test sets, use the train set to fit the model, and generate a prediction for each element on the test set.

A rolling forecast is required given the dependence on observations in prior time steps for differencing and the AR model. A crude way to perform this rolling forecast is to re-create the ARIMA model after each new observation is received.

We manually keep track of all observations in a list called history that is seeded with the training data and to which new observations are appended each iteration.

Putting this all together, below is an example of a rolling forecast with the ARIMA model in Python.

from pandas import read_csv
from datetime import datetime
from matplotlib import pyplot
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
X = series.values
size = int(len(X) * 0.66)
train, test = X[0:size], X[size:len(X)]
history = [x for x in train]
predictions = list()
for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)
    print('predicted=%f, expected=%f' % (yhat, obs))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()

Running the example prints the prediction and expected value each iteration.

We can also calculate a final mean squared error score (MSE) for the predictions, providing a point of comparison for other ARIMA configurations.

predicted=349.117688, expected=342.300000
predicted=306.512968, expected=339.700000
predicted=387.376422, expected=440.400000
predicted=348.154111, expected=315.900000
predicted=386.308808, expected=439.300000
predicted=356.081996, expected=401.300000
predicted=446.379501, expected=437.400000
predicted=394.737286, expected=575.500000
predicted=434.915566, expected=407.600000
predicted=507.923407, expected=682.000000
predicted=435.483082, expected=475.300000
predicted=652.743772, expected=581.300000
predicted=546.343485, expected=646.900000
Test MSE: 6958.325

A line plot is created showing the expected values (blue) compared to the rolling forecast predictions (red). We can see the values show some trend and are in the correct scale.

(Figure: ARIMA Rolling Forecast Line Plot)

The model could use further tuning of the p, d, and maybe even the q parameters.

Configuring an ARIMA Model

The classical approach for fitting an ARIMA model is to follow the Box-Jenkins Methodology.

This is a process that uses time series analysis and diagnostics to discover good parameters for the ARIMA model.

In summary, the steps of this process are as follows:

1. Model Identification. Use plots and summary statistics to identify trends, seasonality, and autoregression elements to get an idea of the amount of differencing and the size of the lag that will be required.
2. Parameter Estimation. Use a fitting procedure to find the coefficients of the regression model.
3. Model Checking. Use plots and statistical tests of the residual errors to determine the amount and type of temporal structure not captured by the model.

The process is repeated until either a desirable level of fit is achieved on the in-sample or out-of-sample observations (e.g. training or test datasets).

The process was described in the classic 1970 textbook on the topic titled Time Series Analysis: Forecasting and Control by George Box and Gwilym Jenkins. An updated 5th edition is now available if you are interested in going deeper into this type of model and methodology.

Given that the model can be fit efficiently on modest-sized time series datasets, grid searching parameters of the model can be a valuable approach.