Research

Information-Theoretic Time-Varying Density Modeling (Job Market Paper)

Abstract
We present a comprehensive framework for constructing dynamic density models by combining elements of information theory with optimization techniques. Specifically, we propose to recursively update a time-varying conditional density by maximizing the log-likelihood contribution subject to a Kullback-Leibler (KL) regularization that penalizes deviations from the density predicted in the previous period. The resulting Relative Entropy Adaptive Density (READY) update can be viewed as an intuitive estimator of the infeasible KL-divergence minimizer relative to the true density. The READY update has attractive optimality properties, is reparametrization invariant, and generalizes the idea of smoothing ex-post proxies when they exist. For example, for the normal distribution our approach coincides with the ARMA(1,1) model for the conditional mean and the GARCH(1,1) model for the conditional variance. Furthermore, we show that score-driven models can serve as computationally attractive and generally accurate approximations of the READY problem, providing a global information-theoretic motivation for this class of models. Empirical usefulness is illustrated with applications to employment growth and asset volatility.
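
For intuition, the recursive update described above can be written schematically as a penalized likelihood problem; the notation below is chosen purely for illustration and is not taken from the paper:

\hat{f}_{t|t} \;=\; \arg\max_{f} \Big\{ \log f(y_t) \;-\; \lambda \, \mathrm{KL}\big( f \,\|\, \hat{f}_{t|t-1} \big) \Big\}, \qquad \lambda > 0,

where \hat{f}_{t|t-1} is the density predicted in the previous period, y_t the new observation, and \lambda the strength of the KL regularization anchoring the update to the prediction.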

Preliminary version available below:

JMP 261122.pdf

Robust Observation-Driven Models using Proximal Parameter Updates

Joint work with Rutger-Jan Lange and Dick van Dijk

Abstract
We propose a novel observation-driven modeling framework that allows for time variation in the model's parameters using a proximal-parameter (ProPar) update. The ProPar update is the solution to an optimization problem that maximizes the logarithmic observation density with respect to the parameter, while penalizing the squared distance of the parameter from its one-step-ahead prediction. The associated first-order condition takes the form of an implicit stochastic-gradient update; replacing this implicit update with its explicit counterpart yields the popular class of score-driven models. Key advantages of the ProPar setup are stronger invertibility properties (especially under model misspecification) and extended (global rather than local) optimality properties. For the class of postulated observation densities whose logarithm is concave, ProPar's robustness is evident from its (i) muted response to large shocks in endogenous and exogenous variables, (ii) stability under poorly specified learning rates, and (iii) global contractivity towards a pseudo-truth; all three hold even under model misspecification. We illustrate the general applicability and practical usefulness of the ProPar framework for time-varying regressions, volatility, and quantiles.
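
As a minimal sketch of the difference between the implicit (proximal) update and its explicit, score-driven counterpart, the Python snippet below works through a time-varying location parameter under a unit-variance Gaussian observation density. All function names and parameter values are illustrative assumptions, not the paper's specification.

from scipy.optimize import minimize_scalar

def propar_step(y, theta_pred, logpdf, learning_rate):
    # Implicit (proximal) update: maximize the log observation density minus a
    # squared-distance penalty from the one-step-ahead prediction (illustrative sketch).
    objective = lambda theta: -(logpdf(y, theta)
                                - (theta - theta_pred) ** 2 / (2.0 * learning_rate))
    return minimize_scalar(objective).x

def score_step(y, theta_pred, score, learning_rate):
    # Explicit counterpart: a plain score-driven (gradient) update.
    return theta_pred + learning_rate * score(y, theta_pred)

# Toy example: time-varying location, unit-variance Gaussian log-density.
logpdf = lambda y, mu: -0.5 * (y - mu) ** 2   # log density up to an additive constant
score = lambda y, mu: y - mu                  # derivative of logpdf with respect to mu
print(propar_step(10.0, 0.0, logpdf, 0.5))    # implicit step: roughly 3.33
print(score_step(10.0, 0.0, score, 0.5))      # explicit step: 5.0

Even in this stylized case the implicit step moves less aggressively towards an outlying observation than the explicit step, hinting at the muted shock response mentioned in the abstract.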

Working paper available

Pooling Dynamic Conditional Correlation Models

Joint work with Dick van Dijk

Abstract
The Dynamic Conditional Correlation (DCC) model by Engle (2002) has become an extremely popular tool for modeling the time-varying dependence of asset returns. However, applications to large cross-sections have proven problematic due to the curse of dimensionality. We propose a novel DCC model with Conditional LInear Pooling (CLIP-DCC), which endogenously determines an optimal degree of commonality in the correlation innovations, allowing a part of the update to be of reduced dimension. In contrast to existing approaches such as the Dynamic EquiCOrrelation (DECO) model, the CLIP-DCC model does not restrict long-run behavior, thereby naturally complementing shrinkage approaches for the target correlation matrix. Empirical findings suggest substantial benefits for a minimum-variance investor in real time. Combining the CLIP-DCC model with target shrinkage yields the largest improvements, confirming that the two address distinct sources of uncertainty in the conditional correlation matrix.
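
The snippet below gives one deliberately simplified way to picture a linearly pooled correlation innovation: it blends the asset-specific outer-product term of a DCC-type update with a single common, equicorrelation-style term. Every choice here, including the fixed pooling weight, is a hypothetical illustration rather than the CLIP-DCC specification, in which the degree of commonality is determined endogenously.

import numpy as np

def pooled_innovation(z, weight):
    # z: standardized return residuals at time t; weight in [0, 1].
    # weight = 1 keeps the full asset-specific innovation, while weight = 0
    # collapses its off-diagonal part to a single common (reduced-dimension) value.
    full = np.outer(z, z)
    n = len(z)
    avg_offdiag = (full.sum() - np.trace(full)) / (n * (n - 1))
    common = np.full((n, n), avg_offdiag)
    np.fill_diagonal(common, np.diag(full))
    return weight * full + (1.0 - weight) * common

z = np.array([0.8, -1.2, 0.4])
print(pooled_innovation(z, 0.6))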

Working paper available

Accelerating Peak Dating in a Dynamic Factor Markov-Switching Model

Joint work with Dick van Dijk

Abstract
The dynamic factor Markov-switching (DFMS) model introduced by Diebold and Rudebusch (1996) has proven to be a powerful framework for measuring the business cycle. We extend the DFMS model by allowing for time-varying transition probabilities, with the aim of accelerating the real-time dating of business cycle peaks. Time variation in the transition probabilities is introduced endogenously through the score-driven approach and exogenously through the term spread. In a real-time application using the four components of The Conference Board’s Coincident Economic Index for the period 1959-2020, we find that signaling power for recessions improves significantly, and we date the 2001 and 2008 recession peaks four and two months after the peak date, respectively, four and ten months earlier than the NBER announcements.
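
Schematically, a time-varying transition probability of the kind described above can be pictured through a logistic link driven by an autoregressive recursion with a scaled-score term and the term spread as an exogenous covariate; the notation is chosen here for illustration only and does not follow the paper:

p_{11,t} \;=\; \frac{\exp(f_t)}{1 + \exp(f_t)}, \qquad f_{t+1} \;=\; \omega + \beta f_t + \alpha s_t + \gamma x_t,

where s_t denotes the scaled score of the time-t likelihood contribution with respect to f_t and x_t is the term spread.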

Working paper available