This paper uses neural network learning to identify learnable rational expectations equilibria in environments where equilibrium behaviour is indeterminate under rational expectations in some regions of the state space. The identified rational expectations equilibria act as a source in locally indeterminate regions, meaning that endogenous variables are repelled and spend very little time in their neighbourhood. These results contrast sharply with the perfect-foresight behaviour in these environments, in which locally indeterminate regions act as a sink, attracting endogenous variables to their neighbourhood. Previous work has analysed such systems under perfect foresight or perturbation around steady states, describing the locally indeterminate region as a sink. Such emphasis appears to be misplaced, since under rational expectations the locally indeterminate region is a source, not a sink. It is also shown that more familiar learning algorithms, such as recursive least squares, converge to qualitatively similar equilibria, but the flexibility of a neural network is necessary for the learned equilibrium to be consistent with rational expectations. These results have potentially important implications in a wide range of contexts, as demonstrated by applying neural network learning to a simple model in which monetary policy is constrained by a Zero Lower Bound. If the indeterminacy due to this constraint on policy is bounded, agents can learn a fully stochastic equilibrium with multiple steady states in which transitory shocks can have permanent effects.
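A minimal sketch of the kind of adaptive learning step involved, in which an agent's neural-network forecast of the next-period state is updated online by stochastic gradient descent as new observations arrive. The law of motion, network size and learning rate below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# An agent forecasts next-period state x' from current state x using a small
# one-hidden-layer network, updated online by SGD. The true law of motion here
# is a simple linear rule with noise, a toy stand-in for a model economy.

rng = np.random.default_rng(0)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(W1 * x + b1)           # hidden layer (width 8)
    return float(W2 @ h + b2), h

def sgd_step(params, x, target, lr=0.05):
    W1, b1, W2, b2 = params
    pred, h = forward(params, x)
    err = pred - target
    # Backpropagate the squared-error gradient through the tanh layer.
    dW2 = err * h
    db2 = err
    dz = err * W2 * (1.0 - h**2)
    dW1 = dz * x
    db1 = dz
    return (W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2)

params = (rng.normal(0, 0.5, 8), np.zeros(8), rng.normal(0, 0.5, 8), 0.0)
x = 0.0
for _ in range(20_000):
    x_next = 0.5 + 0.8 * x + rng.normal(0, 0.1)   # true law of motion
    params = sgd_step(params, x, x_next)
    x = x_next

# Near the stochastic steady state (x* = 2.5) the learned forecast should be
# close to the conditional mean 0.5 + 0.8 * x.
```

In the paper's setting the interesting behaviour arises when the law of motion itself depends on the agents' forecasts; this sketch only shows the learning update in isolation.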
It is an open question what role the media plays in financial markets, whether it is a causal one, and if so whether this effect is of aggregate importance. Using a text mining approach, this paper identifies a robust link between media coverage in the Financial Times newspaper from 1998 to 2017 and a firm's intra-day stock price volatility. By exploiting the timings associated with this effect, I argue that it represents a causal effect of media coverage on volatility. I show that the effect is not driven by persistence in volatility or by the media anticipating future newsworthy events. Using a topic modelling framework, I also show that it is not driven by the content of the coverage. Finally, the identified effects are used to investigate whether this volatility propagates across the stock market, showing that while volatility may spill over into firms related by the structure of the production network, the volatility due to media coverage does not. The paper concludes that while this effect of media coverage is potentially important at the firm level, it has limited aggregate implications.
Nowcasting euro area GDP with news sentiment: a tale of two crises, ECB Working Paper Series, No 2616, with Eleni Kalamara and Lorena Saiz
This paper shows that newspaper articles contain timely economic signals that can materially improve nowcasts of real GDP growth for the euro area. Our text data are drawn from fifteen popular European newspapers that collectively represent the four largest euro area economies, and are machine translated into English. Daily sentiment metrics are created from these news articles and we assess their value for nowcasting. Comparing against competitive and rigorous benchmarks, we find that newspaper text is helpful in nowcasting GDP growth, especially in the first half of the quarter when other, lower-frequency soft indicators are not yet available. The choice of sentiment measure matters when tracking economic shocks such as the Great Recession and the Great Lockdown. Non-linear machine learning models can help capture extreme movements in growth, but require sufficient training data to be effective, and so become more useful later in our sample.
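A dictionary-based daily sentiment metric of the general kind evaluated in this literature can be sketched as follows; the word lists and articles here are toy examples, not the lexicons or data actually used in the paper:

```python
from collections import defaultdict

# Toy positive/negative word lists (illustrative, not the paper's lexicons).
POSITIVE = {"growth", "recovery", "strong", "improve", "expansion"}
NEGATIVE = {"recession", "crisis", "weak", "decline", "contraction"}

def article_sentiment(text: str) -> float:
    """Net positive share of matched tokens, in [-1, 1]; 0 if none match."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / (pos + neg) if (pos + neg) else 0.0

def daily_sentiment(articles):
    """Average article-level sentiment per publication date."""
    by_day = defaultdict(list)
    for date, text in articles:
        by_day[date].append(article_sentiment(text))
    return {d: sum(s) / len(s) for d, s in sorted(by_day.items())}

articles = [
    ("2020-03-16", "deep recession fears as crisis hits output"),
    ("2020-03-16", "weak demand and sharp decline in orders"),
    ("2020-06-01", "signs of recovery and strong growth in surveys"),
]
print(daily_sentiment(articles))
# → {'2020-03-16': -1.0, '2020-06-01': 1.0}
```

The resulting daily series could then enter a nowcasting regression alongside the usual hard and soft indicators.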
An Adaptive Dynamical Model of Default Cascades in Financial Networks with Damian Smug, Peter Ashwin and Didier Sornette (in submission)
We present a model of the dynamics of contagion in financial networks. We assume that the health of a financial institution is described by a single variable representing net worth as a proportion of asset holdings, which becomes zero at default. We argue that differences in the growth of assets and liabilities can give rise to a stable defaulted state as well as a stable healthy state. Stochastic balance sheet shocks can push an institution into the bankruptcy state and lead to further bankruptcy cascades. We introduce contagion between institutions by adapting the shape of the potential landscape so as to make default easier when other institutions have defaulted shortly beforehand, motivated by the links between institutions' balance sheets. The model provides a microscopic dynamical description of the default process, since default events are generated by a stochastic dynamical process rather than modelled as point events of a point process. The correspondence that we find provides a stochastic micro-foundation for models of default intensity.
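The two stable states and shock-driven transitions described above can be sketched with a simple one-dimensional stochastic simulation. The quartic double-well potential, noise level and time step below are generic illustrative choices, not the paper's specification:

```python
import numpy as np

def drift(h):
    # -V'(h) for the double-well potential V(h) = h^2 (h - 1)^2:
    # stable states at h = 0 (defaulted) and h = 1 (healthy),
    # with a barrier at h = 0.5.
    return -2.0 * h * (h - 1.0) * (2.0 * h - 1.0)

def simulate(h0=1.0, sigma=0.05, dt=0.01, steps=50_000, seed=1):
    """Euler-Maruyama simulation of health h; absorb at default (h <= 0)."""
    rng = np.random.default_rng(seed)
    h = h0
    for _ in range(steps):
        h += drift(h) * dt + sigma * np.sqrt(dt) * rng.normal()
        if h <= 0.0:
            return 0.0
    return h

# Small balance-sheet shocks leave the institution fluctuating near the
# healthy state; once pushed below the barrier, the drift carries it to the
# defaulted state, where it remains.
```

Contagion, in this language, corresponds to other institutions' defaults temporarily lowering the barrier between the two wells.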
This paper quantifies the focus of central bank communication and news media, offers an explanation for its variation over time, and shows a robust co-movement in this focus. A model of multidimensional uncertainty and limited communication is proposed to explain the shifting focus of central bank communication. Evidence from the Survey of Professional Forecasters is used to support this explanation, suggesting that the focus of the Federal Reserve's communication shifts to cover variables about which there is greater uncertainty. An event study approach is used to show a potentially causal influence of Federal Reserve communication on the focus of US news media, implying that central banks have some power to inform the public even if their own communication does not reach agents directly. Finally, we show that the communication of three central banks (the Federal Reserve, the Bank of England and the European Central Bank, from 1997 to 2014) co-moves, and that the focus of the Federal Reserve's communication appears to lead that of the other two.
Bayesian Topic Regression for Causal Inference (2021) in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 8162-8188), with Maximillian Ahrens, Jan-Peter Calliess and Vu Nguyen
Causal inference using observational text data is becoming increasingly popular in many research areas. This paper presents the Bayesian Topic Regression (BTR) model, which uses both text and numerical information to model an outcome variable. It allows estimation of both discrete and continuous treatment effects, and permits the inclusion of additional numerical confounding factors alongside the text data. To this end, we combine a supervised Bayesian topic model with a Bayesian regression framework and perform supervised representation learning for the text features jointly with the regression parameter training, respecting the Frisch-Waugh-Lovell theorem. Our paper makes two main contributions. First, we provide a regression framework that allows causal inference in settings where both text and numerical confounders are of relevance. We show with synthetic and semi-synthetic datasets that our joint approach recovers ground truth with lower bias than any benchmark model when text and numerical features are correlated. Second, experiments on two real-world datasets demonstrate that a joint and supervised learning strategy also yields superior prediction results compared to strategies that estimate regression weights for text and non-text features separately, and is even competitive with more complex deep neural networks.
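The Frisch-Waugh-Lovell property that the joint estimation is designed to respect can be illustrated with plain OLS on synthetic data; here the columns of Z are stand-ins for the text-derived features that confound the treatment:

```python
import numpy as np

# FWL theorem: the coefficient on a treatment x from a joint regression of y
# on [x, Z] equals the coefficient from regressing the Z-residualized y on
# the Z-residualized x. Data below are synthetic stand-ins.

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=(n, 3))                                # confounders
x = Z @ np.array([0.5, -0.2, 0.1]) + rng.normal(size=n)    # treatment, correlated with Z
y = 2.0 * x + Z @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

# Joint regression of y on [x, Z]: take the coefficient on x.
X = np.column_stack([x, Z])
beta_joint = np.linalg.lstsq(X, y, rcond=None)[0][0]

def residualize(v, Z):
    """Residual of v after projecting onto the columns of Z."""
    coef = np.linalg.lstsq(Z, v, rcond=None)[0]
    return v - Z @ coef

# FWL route: partial Z out of both y and x, then regress residual on residual.
rx = residualize(x, Z)
ry = residualize(y, Z)
beta_fwl = float(rx @ ry / (rx @ rx))

print(beta_joint, beta_fwl)   # the two estimates coincide
```

In BTR the analogous requirement is that the text representation be learned jointly with the regression, so that the treatment coefficient is not biased by confounding variation absorbed into the topics.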