Time series models, non-stationarity and co-integration
- Some hypotheses and definitions
Moments of a stochastic process and stationarity concepts
- Modelling stationary stochastic processes
The White Noise process
Wold decomposition theorem and moving average (MA) representations
Stationary autoregressive (AR) processes
Mixed processes: ARMA models, maximum likelihood (ML) estimation, specification and forecasting
- Modelling non-stationary stochastic processes
Sources of non-stationarity in first and second moments
Testing for non-stationarity (and stationarity): Augmented Dickey-Fuller, Phillips-Perron and Ng-Perron tests (ADF-PP-NP)
Filtering and modelling non-stationary processes: Hodrick-Prescott (HP) filter and ARIMA modelling
Beveridge-Nelson (BN) trend-cycle decomposition and Hamilton's filter (HTCF)
- Non-stationarity and cointegration
Spurious relations: properties of estimators under stationarity, near-stationarity and non-stationarity
Concept of co-integration (CI)
Granger representation theorem and error-correction (EC) representation
Engle-Granger and Autoregressive Distributed Lag (ARDL)-based EC models (ECMs)
(Non) CI tests: Cointegrating Regression ADF (CRADF), loading coefficients-based and F-based tests
- Applications
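As a flavour of the applications in this block, below is a minimal Python sketch (illustrative only, not the distributed course code) of ADF unit-root testing and an Engle-Granger two-step error-correction model on simulated data, assuming statsmodels is installed; the variable names and data-generating process are assumptions for the example.

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

rng = np.random.default_rng(0)
T = 300
x = np.cumsum(rng.normal(size=T))        # I(1) driving process
y = 0.5 * x + rng.normal(size=T)         # cointegrated with x by construction

# ADF tests on the levels (null hypothesis: unit root)
for name, s in [("y", y), ("x", x)]:
    stat, pval, *_ = adfuller(s, autolag="AIC")
    print(f"ADF {name}: stat={stat:.2f}, p-value={pval:.3f}")

# Engle-Granger step 1: residual-based cointegration test
eg_stat, eg_pval, _ = coint(y, x)
print(f"Engle-Granger: stat={eg_stat:.2f}, p-value={eg_pval:.3f}")

# Engle-Granger step 2: static regression, then an ECM in first differences
static = sm.OLS(y, sm.add_constant(x)).fit()
ec = y - static.predict(sm.add_constant(x))             # error-correction term
dy, dx, ec_lag = np.diff(y), np.diff(x), ec[:-1]
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, ec_lag]))).fit()
print(ecm.params)    # the loading on ec_lag should be negative and significant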
Multivariate time series modelling
- The Vector-autoregressive model (VAR)
- The VAR with exogenous variables (VAR-X) and the Vector-autoregressive Distributed Lag model (VARDL)
- Identification issues: order and rank conditions for identification
- Estimation and model evaluation
- Forecasting and policy simulation
- Applications
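A minimal sketch of reduced-form VAR estimation and forecasting with statsmodels on simulated data (the variable names, lag choices and DGP are illustrative assumptions, not course material):

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T, k = 200, 3
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])            # stable VAR(1) coefficient matrix
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=k)
data = pd.DataFrame(y, columns=["gdp", "infl", "rate"])

res = VAR(data).fit(maxlags=8, ic="aic")   # lag order selected by AIC
print(res.summary())

# h-step-ahead forecasts from the last observed lags
h = 8
fcast = res.forecast(data.values[-res.k_ar:], steps=h)
print(fcast)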
Structural VARs
- Vector Moving Average (VMA) representation of a VAR
Impulse Response Function (IRF)
Forecast Error Variance Decomposition (FEVD)
Historical Decomposition (HD)
- Identification strategies for structural VARs
Cholesky decomposition
A-B model
Long-run exclusion restrictions
Set-identification, i.e., sign restrictions
The structural Vector-Error-correction model (SVEC): Common Trends approach to identification
VAR identification and instrumental variables
Proxy (IV)-VARs
Applications
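An illustrative sketch of recursive (Cholesky) identification with statsmodels on simulated data, producing orthogonalised impulse responses and a forecast error variance decomposition; the variable ordering is an identifying assumption chosen purely for the example:

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
T = 250
e = rng.normal(size=(T, 2))
y1, y2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y1[t] = 0.6 * y1[t - 1] + e[t, 0]
    y2[t] = 0.3 * y2[t - 1] + 0.4 * y1[t - 1] + e[t, 1]
data = pd.DataFrame({"y1": y1, "y2": y2})

res = VAR(data).fit(2)

# Orthogonalised IRFs: shocks identified through the Cholesky factor of the
# residual covariance matrix, with y1 ordered first
irf = res.irf(10)
print(irf.orth_irfs.shape)      # (horizon+1, k, k) array of responses
# irf.plot(orth=True)           # plots the IRFs (requires matplotlib)

# Forecast error variance decomposition at horizons 1..10
fevd = res.fevd(10)
print(fevd.summary())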
Alternative methods for structural time series modelling
- Local Projection methods (LP) and VARs
Identification of LPs through IV-GMM
Identification of LPs through controls and exclusion restrictions
Applications
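A minimal local-projection (LP) sketch: for each horizon h, the outcome at t+h is regressed on an (assumed already identified) shock and a lagged control, with Newey-West (HAC) standard errors. The shock series and data-generating process are simulated placeholders, not course data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, H = 300, 12
shock = rng.normal(size=T)                  # assumed observed/identified shock
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + 0.5 * shock[t] + rng.normal(scale=0.3)

irf, se = [], []
for h in range(H + 1):
    lhs = y[h:]                                         # y_{t+h}
    rhs = np.column_stack([shock[:T - h],               # shock_t
                           np.r_[0.0, y[:T - h - 1]]])  # control: y_{t-1}
    fit = sm.OLS(lhs, sm.add_constant(rhs)).fit(
        cov_type="HAC", cov_kwds={"maxlags": h + 1})    # Newey-West SEs
    irf.append(fit.params[1])                           # LP response at horizon h
    se.append(fit.bse[1])
print(np.round(irf, 3))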
- Machine Learning methods (ML) for prediction and structural modelling
Basic ML estimators: Ridge, Lasso and Elastic Net
Predictive performances of ML methods in low and high-dimensional settings
Lasso-VARs, forecasting, nowcasting
Causal ML: LP identified through double-Post-Lasso/Elastic Net
Causal ML: LP identified through Double/Debiased ML (DML)
Applications
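An illustrative sketch of the basic penalized estimators with scikit-learn, comparing cross-validated Ridge, Lasso and Elastic Net in a sparse high-dimensional forecasting exercise on simulated data (the setup is an assumption for the example, not the course application):

import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(4)
T, p = 200, 100                        # many predictors relative to the sample
X = rng.normal(size=(T, p))
beta = np.zeros(p)
beta[:5] = [1.0, -0.8, 0.5, 0.3, -0.2]  # sparse "true" model
y = X @ beta + rng.normal(scale=0.5, size=T)

cv = TimeSeriesSplit(n_splits=5)       # respect time ordering in cross-validation
models = {
    "ridge": RidgeCV(alphas=np.logspace(-3, 3, 25), cv=cv),
    "lasso": LassoCV(cv=cv),
    "enet": ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=cv),
}
for name, m in models.items():
    m.fit(X[:-20], y[:-20])            # hold out the last 20 periods
    mse = np.mean((m.predict(X[-20:]) - y[-20:]) ** 2)
    print(f"{name}: out-of-sample MSE = {mse:.3f}")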
Reading
Johnston, J., DiNardo, J. (1996). Econometric Methods, 4th edition. McGraw-Hill, ch. 9.4, 9.5 and 9.6.
Kilian, L., Lütkepohl, H. (2017). Structural Vector Autoregressive Analysis. Cambridge: Cambridge University Press.
Lütkepohl, H., Krätzig, M. (2005). Applied Time Series Econometrics. Cambridge: Cambridge University Press.
Hamilton, J. D. (1994). Time Series Analysis. Princeton University Press.
Further readings and material distributed during the lessons
Econometric/math packages
All applications will be carried out in Python. For some applications, Matlab code is also available.
Background knowledge
Students participating in the classes should be familiar with the basic topics in univariate and multivariate time series analysis.
Basic programming in Python and Matlab.
Lessons 2024-25
Tue: Room 1-D - time: 10:00 a.m.-12:00 p.m.
Wed: Room 1-D - time: 6:00 p.m.-8:00 p.m.
Thu: Room 1-D - time: 10:00 a.m.-12:00 p.m.