Research

Working Papers


The Unattractiveness of Indeterminate Dynamic Equilibria, with Paul Beaudry and Martin Ellison

Macroeconomic forces that generate multiple equilibria often support locally-indeterminate dynamic equilibria in which a continuum of perfect foresight paths converge towards the same steady state. The set of rational expectations equilibria (REE) in such environments can be very large, although the relevance of many of them has been questioned on the basis that they may not be learnable. In this paper we document the existence of a learnable REE in such situations. However, we show that the dynamics of this learnable REE do not resemble perturbations around any of the convergent perfect foresight paths. Instead, the learnable REE treats the locally-indeterminate steady state as unstable, in contrast to its role as a stable attractor under perfect foresight.


Financial News Media and Volatility: Is There More to Newspapers than News?

It is an open question what role the media plays in financial markets, whether that role is causal, and, if so, whether the effect is of aggregate importance. Using a text mining approach, this paper identifies a robust link between media coverage in the Financial Times newspaper from 1998 to 2017 and a firm's intra-day stock price volatility. By exploiting the timings associated with this effect, I argue that it represents a causal effect of media coverage on volatility. I show that the effect is not driven by persistence in volatility or by the media anticipating future newsworthy events. Using a topic modelling framework, I also show that it is not driven by the content of the coverage. Finally, the identified effects are used to investigate whether this volatility propagates across the stock market, showing that while volatility may spill over into firms related through the structure of the production network, the volatility due to media coverage does not. The paper concludes that while the effect of media coverage is potentially important at the firm level, it has limited aggregate implications.
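
For illustration, a minimal sketch of the kind of firm-day panel regression this abstract describes is given below, run on toy synthetic data. The variable names and the simple next-day timing convention are assumptions for exposition only; the paper's identification strategy based on intra-day timings is considerably richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy firm-day panel standing in for the FT coverage / intra-day volatility data.
rng = np.random.default_rng(0)
dates = pd.date_range("2017-01-02", periods=60, freq="B")
firms = ["A", "B", "C", "D"]
panel = pd.DataFrame([(d, f) for d in dates for f in firms], columns=["date", "firm"])
panel["covered"] = rng.integers(0, 2, len(panel))                  # FT mention on day t?
panel["realized_vol"] = rng.lognormal(-4.0, 0.3, len(panel))       # toy volatility measure

# Next-day volatility, so coverage is measured strictly before the outcome.
panel = panel.sort_values(["firm", "date"])
panel["vol_next"] = panel.groupby("firm")["realized_vol"].shift(-1)
df = panel.dropna(subset=["vol_next"])

# OLS with firm and day fixed effects (as dummies) and firm-clustered standard errors.
model = smf.ols("vol_next ~ covered + C(firm) + C(date)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"].astype("category").cat.codes}
)
print(model.params["covered"], model.bse["covered"])
```

In practice the coverage indicator would be built from counts of Financial Times articles mentioning each firm, and the volatility measure from intra-day returns.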


Qualitative Analysis at Scale: An Application to Aspirations in Cox's Bazaar, Bangladesh, with Vijayendra Rao, Monica Biradavolu, Arshia Haque, Afsana Khan, Nandini Krishnan and Peer Nagy

Qualitative work has found limited use in economics, largely because the careful reading of text and human coding it requires make it difficult to analyze at scale. This paper presents a framework with which to extend a small set of hand-codings to a much larger set of documents using natural language processing (NLP), and thus analyze qualitative data at scale. We show how to assess the robustness and reliability of this approach, and demonstrate that it can allow the identification of meaningful patterns in the data that the original hand-coded sample is too small to identify. We apply our approach to data collected among Rohingya refugees and their Bangladeshi hosts in Cox's Bazaar, Bangladesh, to build on work in Anthropology and Philosophy that distinguishes between "ambition" (specific goals), "aspiration" (transforming values) and "navigational capacity" (the ability to achieve ambitions and aspirations), and demonstrate that these distinctions can have important policy implications.
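
A minimal sketch of the core idea, extending a handful of hand-codings to a larger corpus with a standard text classifier, is given below on toy data. The TF-IDF-plus-logistic-regression pipeline and the cross-validation reliability check are illustrative assumptions, not the paper's exact NLP framework.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the hand-coded sample (label 1 = aspiration-related, 0 = not)
# and for the much larger uncoded corpus.
hand_coded_docs = [
    "i hope my children can finish school and become doctors",
    "we want to save enough to open a small tailoring shop",
    "my dream is to return home and rebuild our house",
    "one day i would like to study and teach in the camp",
    "we collected firewood and water this morning",
    "the ration card was renewed at the distribution point",
    "the shelter roof leaked during the rains last week",
    "we waited in line for the health clinic all afternoon",
]
hand_codes = [1, 1, 1, 1, 0, 0, 0, 0]
uncoded_docs = [
    "i wish to train as a nurse when it is possible",
    "the road to the market was flooded again",
]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))

# Reliability check: cross-validated accuracy on the hand-coded sample.
print("cv accuracy:", cross_val_score(clf, hand_coded_docs, hand_codes, cv=4).mean())

# Fit on the full hand-coded sample, then extend the coding to the larger corpus.
clf.fit(hand_coded_docs, hand_codes)
print("predicted codes:", clf.predict(uncoded_docs))
```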


Nowcasting euro area GDP with news sentiment: a tale of two crises, ECB Working Paper Series, No 2616, with Eleni Kalamara and Lorena Saiz

This paper shows that newspaper articles contain timely economic signals that can materially improve nowcasts of real GDP growth for the euro area. Our text data is drawn from fifteen popular European newspapers that collectively represent the four largest euro area economies, and is machine translated into English. Daily sentiment metrics are created from these news articles and we assess their value for nowcasting. Comparing against competitive and rigorous benchmarks, we find that newspaper text is helpful in nowcasting GDP growth, especially in the first half of the quarter, when other lower-frequency soft indicators are not yet available. The choice of sentiment measure matters when tracking economic shocks such as the Great Recession and the Great Lockdown. Non-linear machine learning models can help capture extreme movements in growth, but they require sufficient training data to be effective and so become more useful later in our sample.
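
As a rough illustration of how such daily sentiment metrics could be built and aggregated to the quarter being nowcast, consider the sketch below on toy articles with a toy word list; the paper's sentiment measures, translation pipeline and nowcasting models are substantially richer.

```python
import pandas as pd

# Toy positive/negative word lists; real dictionary-based measures are far larger.
POSITIVE = {"growth", "improve", "strong", "recovery", "expansion"}
NEGATIVE = {"decline", "weak", "crisis", "recession", "contraction"}

def net_sentiment(text: str) -> float:
    # Net share of positive minus negative words in an article.
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

# Toy (machine-translated) articles with publication dates.
articles = pd.DataFrame({
    "date": pd.to_datetime(["2020-01-03", "2020-01-03", "2020-04-01", "2020-04-02"]),
    "text": [
        "strong recovery in industrial output reported",
        "orders remain weak as exports decline",
        "the crisis deepens and a recession looms",
        "sentiment appears to improve after the contraction",
    ],
})
articles["sentiment"] = articles["text"].apply(net_sentiment)

# Average sentiment per day, then per quarter being nowcast.
daily = articles.groupby("date")["sentiment"].mean()
quarterly = daily.groupby(daily.index.to_period("Q")).mean()
print(quarterly)
```

The resulting quarterly series would then enter the nowcasting models, from simple benchmarks to the non-linear machine learning models discussed above, alongside the usual hard and soft indicators.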


The Shifting Focus of Central Bankers

This paper quantifies the focus of central bank communication and news media, offers an explanation for its variation over time, and shows a robust co-movement in this focus. A model of multidimensional uncertainty and limited communication is proposed to explain the shifting focus of central bank communication. Evidence from the Survey of Professional Forecasters is used to support this explanation, suggesting that the focus of the Federal Reserve's communication shifts to cover variables about which there is greater uncertainty. An event study approach is used to show a potentially causal influence of Federal Reserve communication on the focus of US news media, implying that central banks have some power to inform the public even if their own communication does not reach agents directly. Finally, we show that the communication of three different central banks (the Federal Reserve, the Bank of England and the European Central Bank, from 1997 to 2014) co-moves in its focus, and that the focus of the Federal Reserve's communication appears to lead that of the other central banks.
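
To fix ideas, one simple way to quantify "focus" is as the share of attention a body of text devotes to each of a small number of topics. The sketch below does this with off-the-shelf LDA on toy placeholder statements; it is only an assumed stand-in for the paper's measurement of central bank and media focus.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy placeholder statements; real inputs would be central bank communication and news text.
statements = [
    "inflation pressures remain elevated and price stability is the priority",
    "labour market conditions and employment growth remain solid",
    "financial stability risks in bank funding markets are being monitored",
    "inflation expectations stay anchored despite energy price increases",
    "credit conditions and bank lending standards have tightened",
    "employment gains slowed while wage growth moderated",
]

counts = CountVectorizer(stop_words="english").fit_transform(statements)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Each row of topic_shares sums to one, so averaging rows over a period gives the
# share of communication devoted to each topic, one simple notion of "focus".
topic_shares = lda.transform(counts)
print(topic_shares.round(2))
```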


Publications


An Adaptive Dynamical Model of Default Cascades in Financial Networks, with Damian Smug, Peter Ashwin and Didier Sornette (forthcoming in Quantitative Finance)

We present a model of the dynamics of contagion in financial networks. We assume that the health of a financial institution is described by a single variable, representing net worth as a proportion of asset holdings, that becomes zero at default. We argue that differences in the growth of assets and liabilities can give rise to a stable defaulted state as well as a stable healthy state. Stochastic balance sheet shocks can push an institution into the bankruptcy state and lead to further bankruptcy cascades. We introduce contagion between institutions by adapting the shape of the potential landscape so as to make default easier when other institutions have defaulted shortly beforehand, motivated by the links between institutions' balance sheets. The model provides a microscopic dynamical description of the default process, since default events are generated by a stochastic dynamical process rather than simply modelled as point events of a point process. The correspondence that we find provides a stochastic micro-foundation for models of default intensity.
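
A stylised sketch of this kind of dynamics is given below: each institution's health follows noisy overdamped motion in a double-well potential with a healthy and a defaulted state, and defaults by others tilt the landscape towards the defaulted well. The specific potential, the zero-crossing default rule and the coupling parameter are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

# Stylised simulation (not the paper's specification): overdamped dynamics in a
# double-well potential V(h) = (h**2 - 1)**2 / 4 + pressure * h, with wells near
# h = +1 (healthy) and h = -1 (defaulted). Positive "pressure" from recent defaults
# tilts the landscape towards the defaulted well -- a simple contagion channel.
rng = np.random.default_rng(0)
n_banks, n_steps, dt, sigma = 10, 5000, 0.01, 0.4
coupling = 0.6  # assumed strength of the contagion tilt

def drift(h, pressure):
    # -dV/dh for the tilted double-well potential above.
    return -h * (h ** 2 - 1) - pressure

h = np.ones(n_banks)                      # all institutions start healthy
defaulted = np.zeros(n_banks, dtype=bool)
for _ in range(n_steps):
    pressure = coupling * defaulted.mean()           # more defaults -> stronger tilt
    shocks = sigma * np.sqrt(dt) * rng.standard_normal(n_banks)
    h = h + drift(h, pressure) * dt + shocks         # Euler-Maruyama step
    defaulted |= h < 0                               # crossing zero counts as default
    h[defaulted] = np.minimum(h[defaulted], 0)       # default is absorbing here

print("number of defaults:", int(defaulted.sum()))
```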


Bayesian Topic Regression for Causal Inference (2021) in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 8162-8188), with Maximillian Ahrens, Jan-Peter Calliess and Vu Nguyen

Causal inference using observational text data is becoming increasingly popular in many research areas. This paper presents the Bayesian Topic Regression (BTR) model, which uses both text and numerical information to model an outcome variable. It allows estimation of both discrete and continuous treatment effects, and it allows for the inclusion of additional numerical confounding factors alongside the text data. To this end, we combine a supervised Bayesian topic model with a Bayesian regression framework and perform supervised representation learning for the text features jointly with the regression parameter training, respecting the Frisch-Waugh-Lovell theorem. Our paper makes two main contributions. First, we provide a regression framework that allows causal inference in settings where both text and numerical confounders are relevant. We show with synthetic and semi-synthetic datasets that our joint approach recovers the ground truth with lower bias than any benchmark model when text and numerical features are correlated. Second, experiments on two real-world datasets demonstrate that a joint and supervised learning strategy also yields superior prediction results compared to strategies that estimate regression weights for text and non-text features separately, and is even competitive with more complex deep neural networks.
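
As an aside, the snippet below numerically checks what the Frisch-Waugh-Lovell theorem guarantees in a plain linear setting, using synthetic "topic" features and numerical confounders; it illustrates only the linear-algebra fact that the joint estimation is designed to respect, not the BTR model itself.

```python
import numpy as np

# Synthetic data: Z plays the role of text-derived features, X of numerical confounders.
rng = np.random.default_rng(1)
n = 500
X = rng.standard_normal((n, 2))                          # numerical confounders
Z = 0.5 * X[:, [0]] + rng.standard_normal((n, 3))        # "text" features, correlated with X
y = X @ np.array([1.0, -2.0]) + Z @ np.array([0.5, 0.0, 1.5]) + rng.standard_normal(n)

def ols(A, b):
    # Least-squares coefficients of b on the columns of A.
    return np.linalg.lstsq(A, b, rcond=None)[0]

ones = np.ones((n, 1))
full = ols(np.hstack([ones, X, Z]), y)                   # joint regression on X and Z
beta_joint = full[3:]                                    # coefficients on Z

# FWL: residualise y and each column of Z on [1, X], then regress residuals on residuals.
M = np.hstack([ones, X])

def resid(v):
    return v - M @ ols(M, v)

Z_resid = np.column_stack([resid(Z[:, j]) for j in range(Z.shape[1])])
beta_fwl = ols(Z_resid, resid(y))

print(np.allclose(beta_joint, beta_fwl))                 # True: the two estimates coincide
```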