2019 SIDE Summer School

2019 SIdE Summer School of Econometrics

First of all, SIdE is the acronym for the Italian Econometric Association.

For the 2019 SIdE Summer School of Econometrics there will be a location change: the Summer School will be held in Bertinoro (FC), near Forlì, and not at the Sadiba Center in Perugia, where it has been held for the last 9 years.

We will have a first summer school on Principles, Ideas and Theory in Econometric Time Series with Examples from Cointegration, Bootstrap, ARCH, State Space and Big Data Models, taught by Soren Johansen (University of Copenhagen, Denmark) and Anders Rahbek (University of Copenhagen, Denmark), from June 17 through June 22, 2019.

We will also have a second summer school on Machine Learning Algorithms for Econometricians, taught by Arthur Charpentier (UQAM, Canada) and Emmanuel Flachaire (AMSE, France), from July 15 through July 20, 2019.

The dates of the 2019 Summer School are: 

-    from June 17 through June 22 for Principles, Ideas and Theory in Econometric Time Series with Examples from Cointegration, Bootstrap, ARCH, State Space and Big Data Models with Soren Johansen and Anders Rahbek

-    from July 15 through July 20 for Machine Learning Algorithms for Econometricians with Arthur Charpentier and Emmanuel Flachaire.

See details and syllabi below:

Past Summer School Evaluations

-    2018 Students' Evaluations: 

-    2017 Students' Evaluations:

First week

Principles, Ideas and Theory in Econometric Time Series

with Examples from Cointegration, Bootstrap, ARCH, State Space and Big Data Models

Speakers: Soren Johansen (University of Copenhagen, Denmark) and Anders Rahbek (University of Copenhagen, Denmark)

Dates: June 17th - June 22nd, 2019

Syllabus in pdf (link)

Course Description

The course will be in two main parts: 

The first part discusses econometric methods and theory, which are then applied in the second part, where selected topics from cointegration, state space models, the bootstrap and multivariate ARCH models, as well as big data modelling, are discussed in detail, drawing on recent research.

In Part I, we give an introduction, aimed at graduate/Ph.D.-level students in econometrics, to

(i) asymptotic theory for stationary, i.i.d. as well as non-stationary (integrated of order one) variables; 

(ii) theory for the bootstrap; 

(iii) theory for cointegration and for (multivariate) ARCH models; and, 

(iv) theory for the Kalman Filter. 

All theory presented will be in terms of examples where details are explained, rather than providing a general introduction to the field(s).

In Part II, we discuss recent research with reference to the theory and methodology introduced in Part I. The topics include:

(i) Cointegration and adjustment in a common trends causal model and the role of weak exogeneity.

(ii) Optimal hedging and cointegration in the presence of heteroscedastic errors.

(iii) Bootstrap based inference in stationary and non-stationary (conditionally heteroscedastic) autoregressive models.

(iv) Models, Methods and Big Data

Topics Covered

Time series, Cointegration, Bootstrap, Testing, ARCH models, State Space models, Big data models, etc.

Structure

Part I: Introduction to the theory of:

Asymptotic theory for i.i.d., stationary and non-stationary univariate variables

We consider some simple statistical models and discuss a general methodology for conducting likelihood inference.

Example 1. The univariate AR model,

x_t = \rho x_{t-1} + \varepsilon_t 

for \rho = 1 and |\rho| < 1.
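
For intuition, here is a minimal R sketch (purely illustrative, not part of the course material; the function name is ours) that simulates this AR(1) under the unit root \rho = 1 and under stationarity:

    # simulate x_t = rho * x_{t-1} + eps_t with standard normal innovations
    simulate_ar1 <- function(rho, n) {
      x <- numeric(n)
      eps <- rnorm(n)
      for (t in 2:n) x[t] <- rho * x[t - 1] + eps[t]
      x
    }
    x_rw   <- simulate_ar1(rho = 1.0, n = 500)   # integrated of order one
    x_stat <- simulate_ar1(rho = 0.5, n = 500)   # stationary case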

Example 2. The univariate ARCH model,

x_t = \sqrt{1 + \rho x^2_{t-1} } z_t

for \rho such that x_t is (non-)stationary.
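
A corresponding R sketch (again illustrative only) simulates this ARCH recursion; whether x_t is stationary depends on \rho, as discussed in the lectures:

    # simulate x_t = sqrt(1 + rho * x_{t-1}^2) * z_t with standard normal z_t
    simulate_arch <- function(rho, n) {
      x <- numeric(n)
      z <- rnorm(n)
      for (t in 2:n) x[t] <- sqrt(1 + rho * x[t - 1]^2) * z[t]
      x
    }
    x_arch <- simulate_arch(rho = 0.5, n = 500)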

Example 3. The common trends model for observation y_t and the unobserved state variable \alpha_t is given by

y_t = \beta \alpha_{t-1} + \varepsilon_t;

\alpha_t = \rho \alpha_{t-1} + \eta_t.
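
A short R sketch (illustrative; both innovation variances are set to one and \alpha_0 = 0, which are our simplifying choices) simulating the common trends model:

    # simulate y_t = beta * alpha_{t-1} + eps_t, alpha_t = rho * alpha_{t-1} + eta_t
    simulate_common_trends <- function(beta, rho, n) {
      alpha <- numeric(n)
      y <- numeric(n)
      eps <- rnorm(n)
      eta <- rnorm(n)
      y[1] <- eps[1]            # alpha_0 = 0
      alpha[1] <- eta[1]
      for (t in 2:n) {
        y[t] <- beta * alpha[t - 1] + eps[t]
        alpha[t] <- rho * alpha[t - 1] + eta[t]
      }
      list(y = y, alpha = alpha)
    }
    sim <- simulate_common_trends(beta = 1, rho = 1, n = 500)   # rho = 1: common stochastic trend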

References:

Jensen, S.T. and A. Rahbek (2004), Asymptotic Inference for Nonstationary GARCH, Econometric Theory, 20:1203–1226.

Johansen, S. and A. Rahbek (2019) Lecture notes, unpublished.

Kristensen, D. and A. Rahbek (2005) Asymptotics of the QMLE for a Class of ARCH(q) Models, Econometric Theory, 21:946–961.

Kristensen, D. and A. Rahbek (2010), Likelihood-based Inference for Cointegration with Nonlinear Error-Correction, Journal of Econometrics, 158:78–94.

Theory of the Bootstrap

Example 4. The AR(1) bootstrap,

x_t^* = \rho^* x_{t-1}^* + \varepsilon_t^*

where the bootstrap process x_t^* is generated using the bootstrap parameter \rho^* and the bootstrap innovations \varepsilon_t^*, which in general are functions of the original data x_1,\ldots,x_T.
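
As a concrete illustration (a schematic sketch, not necessarily the exact scheme analysed in the references below), one recursive choice takes \rho^* to be the OLS estimate and draws \varepsilon_t^* with replacement from the centred residuals:

    # one bootstrap sample for the AR(1): rho* = OLS estimate, eps* resampled residuals
    ar1_bootstrap_sample <- function(x) {
      n <- length(x)
      rho_hat <- sum(x[-1] * x[-n]) / sum(x[-n]^2)      # OLS estimate of rho
      res <- x[-1] - rho_hat * x[-n]                    # residuals
      res <- res - mean(res)                            # centre them
      eps_star <- sample(res, n, replace = TRUE)        # bootstrap innovations
      x_star <- numeric(n)
      for (t in 2:n) x_star[t] <- rho_hat * x_star[t - 1] + eps_star[t]
      x_star
    }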

Example 5. The ARCH bootstrap,

x_t^* = \sqrt{ 1+ \rho^* x_{t-1}^{*2} } z_t^*

for \rho^* and z_t^* functions of the original data x_1,\ldots,x_T.
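
A matching sketch for the ARCH bootstrap (illustrative; here \rho^* is passed in as given, whereas in practice it would be an estimate, e.g. the QMLE, computed from x_1,\ldots,x_T), with z_t^* resampled from the standardised residuals:

    # one ARCH bootstrap sample: z* resampled from standardised residuals
    arch_bootstrap_sample <- function(x, rho_star) {
      n <- length(x)
      z_hat <- x[-1] / sqrt(1 + rho_star * x[-n]^2)     # standardised residuals
      z_star <- sample(z_hat, n, replace = TRUE)
      x_star <- numeric(n)
      for (t in 2:n) x_star[t] <- sqrt(1 + rho_star * x_star[t - 1]^2) * z_star[t]
      x_star
    }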

Literature:

Cavaliere, G. and A. Rahbek (2012), Bootstrap Determination of the Co-Integration Rank in Vector Autoregressive Models, Econometrica, 80:1721–1740.

Cavaliere, G., H.B. Nielsen and A. Rahbek (2017), On the Consistency of Bootstrap Testing for a Parameter on the Boundary of the Parameter Space, Journal of Time Series Analysis, 38:513–534.

Theory for the CVAR and Multivariate ARCH

We again consider some examples, where now \rho is a (p \times p)-dimensional matrix.

Example 6. The cointegrated vector autoregressive model (CVAR) for multivariate cointegration,

x_t = \rho x_{t-1} + \varepsilon_t, \rho=\alpha \beta'
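
To fix ideas, a bivariate simulation sketch in R (illustrative only; it is written in the error-correction form \Delta x_t = \alpha \beta' x_{t-1} + \varepsilon_t commonly used for the CVAR, with parameter values chosen by us):

    # simulate a bivariate CVAR(1) in error-correction form: dx_t = alpha * (beta' x_{t-1}) + eps_t
    set.seed(42)
    n <- 500
    alpha <- c(-0.2, 0.1)                  # adjustment coefficients
    beta  <- c(1, -1)                      # cointegrating vector
    x <- matrix(0, n, 2)
    for (t in 2:n) {
      ec <- sum(beta * x[t - 1, ])         # error-correction term beta' x_{t-1}
      x[t, ] <- x[t - 1, ] + alpha * ec + rnorm(2)
    }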

Example 7. The multivariate autoregressive conditional heteroscedastic (ARCH) model,

x_t = \Omega_t z_t, \Omega_t = I + \rho x_{t-1} x_{t-1}' \rho.
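
A simulation sketch in R that reads \Omega_t as the conditional covariance matrix of x_t given the past and generates x_t = \Omega_t^{1/2} z_t (this reading, the multivariate analogue of Example 2, is our assumption; the lecture notes define the exact form, and the parameter values here are illustrative):

    # bivariate ARCH with conditional covariance Omega_t = I + rho x_{t-1} x_{t-1}' rho
    set.seed(7)
    p <- 2
    n <- 500
    rho <- diag(c(0.3, 0.5))               # p x p parameter matrix
    x <- matrix(0, n, p)
    for (t in 2:n) {
      Omega <- diag(p) + rho %*% x[t - 1, ] %*% t(x[t - 1, ]) %*% rho
      x[t, ] <- drop(t(chol(Omega)) %*% rnorm(p))   # Omega^{1/2} z_t via the Cholesky factor
    }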

Literature:

Johansen, S. and A. Rahbek (2019) Lecture notes, unpublished.

Probabilistic and statistical analysis of the common trends model

The multivariate common trends model for observation x_t \in R^p and unobserved state variable \alpha_t \in R^m is given by

x_t = \beta \alpha_{t-1} + \varepsilon_t

\alpha_t = \rho \alpha_{t-1} + \eta_t

The lecture will discuss identification of the parameters, and simple inference for \beta based on a regression estimator for \beta.

The Gaussian likelihood can be calculated using the Kalman Filter, and we discuss the prediction error formulation of the model, and the diffuse and conditional likelihood.

Based on this, we discuss existence, consistency and asymptotic distribution of the maximum likelihood estimator, using score and information.
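
To make the prediction error decomposition concrete, here is a univariate Kalman filter sketch in R for the state space form above (a rough illustration under simplifying assumptions we impose: both innovation variances equal one by default, and a large prior variance stands in for a proper diffuse initialisation):

    # Gaussian log-likelihood via the prediction error decomposition for
    # y_t = beta * alpha_{t-1} + eps_t,  alpha_t = rho * alpha_{t-1} + eta_t
    kalman_loglik <- function(y, beta, rho, s2_eps = 1, s2_eta = 1) {
      n <- length(y)
      a <- 0                      # E(alpha_{t-1} | y_1, ..., y_{t-1})
      P <- 1e6                    # its variance (large value as a crude diffuse prior)
      ll <- 0
      for (t in 1:n) {
        v  <- y[t] - beta * a                     # prediction error
        Fv <- beta^2 * P + s2_eps                 # prediction error variance
        ll <- ll - 0.5 * (log(2 * pi) + log(Fv) + v^2 / Fv)
        a_f <- a + P * beta * v / Fv              # filtered alpha_{t-1}
        P_f <- P - (P * beta)^2 / Fv
        a <- rho * a_f                            # predict alpha_t
        P <- rho^2 * P_f + s2_eta
      }
      ll
    }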

Literature:

Johansen, S. (2018) Inference in a simple nonstationary state space model. Unpublished.

Chang, Y., J. I. Miller, and J. Y. Park (2009) Extracting a common stochastic trend: Theory with some applications. Journal of Econometrics, 150:231–247.

Part II: Research Topics:

Cointegration and adjustment in a common trends causal model and the role of weak exogeneity

The lectures will contain a discussion of a causal model for stationary variables and a new causal model for nonstationary variables.

A simple CVAR(1) model for some observed variables, x_t, and some unobserved variables, \tau_t, is defined and the question of weak exogeneity in the derived model for the observations is discussed.

The techniques used in the discussion are: unobserved components models and their CVAR(\infty) representation. The Kalman Filter technique for deriving a random walk representation of the conditional mean of the unobserved component, E(\tau_t | x_0, \ldots, x_t), and some results from control theory are used to show the existence of the limiting conditional variance of the unobserved component as the solution of a matrix Riccati equation. A few examples will be used for illustration.
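
As a generic illustration of the Riccati equation mentioned above (not the specific equation derived in the paper), the limiting prediction variance of a linear state space model can be computed by iterating the standard Kalman filter variance recursion to its fixed point; the function name and system matrices below are ours:

    # iterate P -> T P T' + Q - T P Z' (Z P Z' + H)^{-1} Z P T' until convergence,
    # where T, Z, Q, H are the transition, observation and variance matrices
    riccati_fixed_point <- function(Tm, Z, Q, H, tol = 1e-10, max_iter = 10000) {
      P <- Q
      for (i in 1:max_iter) {
        K <- Tm %*% P %*% t(Z) %*% solve(Z %*% P %*% t(Z) + H)   # Kalman gain
        P_new <- Tm %*% P %*% t(Tm) + Q - K %*% Z %*% P %*% t(Tm)
        if (max(abs(P_new - P)) < tol) return(P_new)
        P <- P_new
      }
      P
    }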

Literature:

Johansen, S. (2019) Cointegration and Adjustment in the infinite order CVAR representation of some partially observed CVAR(1) models, Econometrics, 7:2.

Optimal hedging and cointegration in the presence of heteroscedastic errors

The role of cointegration is analysed for optimal hedging of an h-period portfolio. Prices are assumed to be generated by a cointegrated vector autoregressive model allowing for stationary martingale errors, satisfying a mixing condition and hence some heteroscedasticity.

The risk of a portfolio is measured by the conditional variance of the h-period return given information at time t. If the price of an asset is nonstationary, the risk of keeping the asset for h periods diverges for large h. The h-period minimum variance hedging portfolio is derived, and it is shown that it approaches a cointegrating vector for large h, thereby giving a bounded risk. Taking the expected return into account, the portfolio that maximizes the Sharpe ratio is found, and it is shown that it also approaches a cointegration portfolio.

For constant conditional volatility, the conditional variance can be estimated, using regression methods or the reduced rank regression method of cointegration. In case of conditional heteroscedasticity, however, only the expected conditional variance can be estimated without modelling the heteroscedasticity. The findings are illustrated with a data set of prices of two year forward contracts for electricity, which are hedged by forward contracts for fuel prices. The main conclusion of the paper is that for optimal hedging, one should exploit the cointegrating properties for long horizons, but for short horizons more weight should be put on the remaining dynamics.
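
For the constant-volatility case, a simplified R sketch of the regression approach mentioned above (illustrative only; the function is ours, and p and f denote the price series of the asset and of the hedging instrument):

    # h-period minimum-variance hedge ratio estimated by regressing
    # h-period price changes of the asset on those of the hedging instrument
    hedge_ratio <- function(p, f, h) {
      n <- length(p)
      dp <- p[(h + 1):n] - p[1:(n - h)]
      df <- f[(h + 1):n] - f[1:(n - h)]
      unname(coef(lm(dp ~ df))["df"])
    }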

Literature:

Gatarek, L. and Johansen, S. (2019) The role of cointegration for optimal hedging with heteroscedastic error term. Unpublished.

Bootstrap based inference in stationary and non-stationary (conditionally heteroscedastic) autoregressive models: Hybrid and shrinking bootstrap.

In this lecture we discuss the general application of the bootstrap for statistical inference in econometric time series models.

We do this by considering in detail the implementation of bootstrap inference in the class of double-autoregressive [DAR] models as well as ARCH models.

DAR models are particularly interesting for illustrating the implementation of the bootstrap in time series: first, standard asymptotic inference is usually difficult to implement due to the presence of nuisance parameters under the null hypothesis; second, inference involves testing whether one or more parameters lie on the boundary of the parameter space; third, under the alternative hypothesis, fourth or even second order moments may not exist. In most of these cases, the bootstrap is not considered an appropriate tool for inference.

Conversely, and taking testing of (non-)stationarity as an illustration, we show that although a standard bootstrap based on unrestricted parameter estimation is invalid, a correct implementation of a bootstrap based on restricted parameter estimation (the restricted bootstrap) is first-order valid; that is, it is able to replicate, under the null hypothesis, the correct limiting null distribution. Importantly, we also show that the behaviour of this bootstrap under the alternative hypothesis may be different because of a possible lack of finite second-order moments of the bootstrap innovations. For some parameter configurations, this feature makes the restricted bootstrap unable to replicate the null asymptotic distribution when the null is false.

We will see that this drawback can be fixed by using a new 'hybrid' bootstrap, where the parameter estimates used to construct the bootstrap data are obtained with the null imposed, while the bootstrap innovations are sampled with replacement from the unrestricted residuals. We will discuss how this bootstrap, novel in this framework, mimics the correct asymptotic null distribution irrespective of whether the null is true or false. Throughout, we use a number of examples from the bootstrap time series literature to illustrate the importance of properly defining and analyzing the bootstrap generating process and the associated bootstrap statistics.
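
A schematic sketch of the hybrid scheme, written for the simple AR(1) example of Part I rather than for the DAR model discussed in the lecture: the bootstrap data are built with the null-imposed parameter value, while the innovations are resampled from the unrestricted residuals.

    # hybrid bootstrap sample for H0: rho = rho_null in x_t = rho x_{t-1} + eps_t
    hybrid_bootstrap_sample <- function(x, rho_null) {
      n <- length(x)
      rho_hat <- sum(x[-1] * x[-n]) / sum(x[-n]^2)        # unrestricted estimate
      res <- x[-1] - rho_hat * x[-n]                      # unrestricted residuals
      res <- res - mean(res)
      eps_star <- sample(res, n, replace = TRUE)          # innovations from the unrestricted fit
      x_star <- numeric(n)
      for (t in 2:n) x_star[t] <- rho_null * x_star[t - 1] + eps_star[t]   # DGP under the null
      x_star
    }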

Literature:

Cavaliere and Rahbek, Econometric Theory Lecture 2019, Lecce, Italy, unpublished.

Models, methods and big data

"All models are wrong but some are useful". (George Box)

"All models are wrong, and increasingly you can succeed without them." (Peter

Norvig, Google’s research director) 

The lecture is about methods and models. "Methods" means algorithms, and "models" we know about. What is the interplay between the two? Are models obsolete? What is the role of models in our work? Do models come before methods, or the other way around? Does the Big Data revolution try to solve the same problems as before, or does it attack new problems that we could not even dream of?

To me, a model is a way of expressing my understanding of what goes on, so that we can communicate with others and construct thought experiments and real experiments that can further our understanding of what is going on. The lecture will illustrate this with a few historical examples of the interplay between models and methods, and will give a brief introduction to some new (model-based) results on cointegration and big data.

Literature:

Onatski, A. and C. Wang (2018) Alternative asymptotics for cointegration tests in large VARs, Econometrica, 86:1465–1478.

Anderson, C. (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired Magazine.

Chang, Y., C. Kim, and J. Park (2016). Nonstationarity in time series of state densities. Journal of Econometrics, 192:152–167. 

Beare, B. and W. Seo (2018). Representation of I(1) and I(2) autoregressive Hilbertian processes. In press. 

Franchi, M. and Paruolo, P. (2018) Cointegration in functional autoregressive processes. In press.

Second week

Machine Learning Algorithms for Econometricians

Speakers: Arthur Charpentier (UQAM, Canada) and Emmanuel Flachaire (AMSE, France)

Dates: July 15th - July 20th, 2019

Syllabus in pdf (link)

Course Description

Do you feel lost in the random forests? Do you need some career boosting? Would you like to demystify magic words like cross-validation, bagging, shrinkage, etc.? Or discover what is hidden behind wild acronyms like GAM, LASSO or GBM that you heard during that meeting, at the coffee machine, or at that seminar with a fancy title?

If so, then you should consider attending this one-week intensive course on machine learning techniques.

These lectures have been conceived by econometricians for econometricians. The sessions proceed step by step, recalling the fundamental statistical concepts at the heart of modern learning techniques. The relative merits of these techniques are illustrated by means of several case studies with real data.

The course will present Machine Learning Techniques to econometricians. In particular, the lecturers will 

Throughout the lectures, we will see how to implement most of the techniques in R, with two “hands-on” sessions, one on classification problems and another on regression, both with real data. Participants are invited to bring their own laptop with R installed. Data sets and R code will be made available through a supporting website.
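
To give a flavour of the hands-on sessions, here is a small R sketch of the kind of workflow involved (the package choices glmnet and randomForest and the simulated data are our assumptions; the official data sets and code are those distributed through the course website):

    # cross-validated LASSO and a random forest on simulated regression data
    library(glmnet)
    library(randomForest)
    set.seed(1)
    n <- 200; p <- 20
    X <- matrix(rnorm(n * p), n, p)
    y <- X[, 1] - 2 * X[, 2] + rnorm(n)          # only two relevant regressors
    cv_lasso <- cv.glmnet(X, y, alpha = 1)       # penalty chosen by cross-validation
    coef(cv_lasso, s = "lambda.min")             # shrinkage: most coefficients set to zero
    rf <- randomForest(x = X, y = y, ntree = 500)
    importance(rf)                               # variable importance from the forest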

Topics Covered

Algorithms, bagging, boosting, bootstrap, cross-validation, LASSO, misspecification, neural networks, nonlinearities, optimization, overfit, penalization, R, random forests, regression, splines, trees, etc.  

Structure

Monday July 15 

Tuesday July 16 

Wednesday July 17 

Thursday July 18 

Friday July 19 

Saturday July 20 

 

References

To enroll 

Please follow the link to the SIdE website.

Organizer (on behalf of SIDE): Juri Marcucci

Venue: Centro Residenziale Universitario di Bertinoro, Via Frangipane, 6 - Bertinoro (FC), 47032, Italy

Some useful links:

If you have any further questions, please send an email to "Società Italiana di Econometria - SIdE" at info@side-iea.it
