We examine how firms describe their experiences in the aftermath of India's sudden 2016 demonetization announcement, which rendered 86% of the cash in circulation no longer legal tender. We gauge a firm's exposure to the policy shock by the relative frequency of demonetization mentions in its financial reports. We also apply topic modeling to these reports to discern the different ways firms were affected. We find that firms are differentially exposed, with construction most affected and education and health services least affected. Small firms are more exposed than large firms, although firms of all sizes and across all industries express uncertainty and worry about the future. Remarkably, the largest reported impact was not the shortage of cash itself but uncertainty about the future.
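The exposure measure described above can be sketched as a simple relative-frequency count. This is a minimal illustration with invented toy reports, not the paper's actual data or code; the function name, normalization (mentions per 1,000 words), and keyword matching rule are assumptions for the example.

```python
import re

def exposure_score(report_text: str, keyword: str = "demonetization") -> float:
    """Hypothetical exposure measure: keyword mentions per 1,000 words."""
    words = re.findall(r"[a-z']+", report_text.lower())
    if not words:
        return 0.0
    # startswith also catches forms like "demonetization-related",
    # which the tokenizer splits at the hyphen
    hits = sum(1 for w in words if w.startswith(keyword.lower()))
    return 1000.0 * hits / len(words)

# Toy reports, purely illustrative
reports = {
    "construction_co": "Demonetization disrupted cash payments to labour. "
                       "Demonetization-related delays hurt project timelines.",
    "school_trust": "Enrolment and fee collection were stable this year.",
}
scores = {firm: exposure_score(text) for firm, text in reports.items()}
```

In the toy data, the construction firm scores higher than the school, mirroring the paper's finding that construction was the most exposed sector.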
In recent years, experts across a variety of social science and humanistic disciplines have adopted natural language processing technologies to assist their analyses of large text collections. However, these new projects in computational text analysis are often stymied by obstacles in the critical human work of applying these models: obtaining access to data in a useful format, implementing a processing workflow that attends to the aspects the expert cares about, and analyzing the limited information a model of text can reflect. In this talk, I discuss how hard it can be for text analysis novices to navigate the underspecified “recipes” of the text analysis process, focusing specifically on LDA topic models. I touch on research I’ve done with students on how text analysis practitioners make meaning from LDA models and on how we can build software to better support their work. Expect many baking analogies.
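For readers unfamiliar with what an LDA model actually computes, the core inference step can be sketched as a collapsed Gibbs sampler over token-topic assignments. This is a teaching sketch in plain Python, assuming standard symmetric priors (alpha, beta) and a tiny invented corpus; real practitioners would use an optimized library, and every name here is an assumption of the example rather than anyone's published code.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.

    docs: list of token lists. Returns per-topic word counts.
    """
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]           # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                            # topic totals
    z = []                                         # token assignments
    for d, doc in enumerate(docs):                 # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the token's current assignment from the counts
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # resample a topic from the collapsed conditional
                weights = [
                    (ndk[d][t] + alpha) * (nkw[t][w] + beta)
                    / (nk[t] + vocab_size * beta)
                    for t in range(n_topics)
                ]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return nkw

# Toy corpus with two obvious themes (finance vs. baking)
docs = [
    "cash payment cash bank".split(),
    "bank cash payment payment".split(),
    "flour sugar butter flour".split(),
    "sugar butter flour sugar".split(),
]
topic_words = lda_gibbs(docs, n_topics=2)
```

The "recipe" framing in the talk fits this code well: choices like the number of topics, the priors, and even the tokenization are left to the practitioner, and each choice changes what the resulting topics look like.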