Deputation in the surveillance complex
I conceptualize advanced democracies’ systems of mass online surveillance as part of a “surveillance complex”. I examine the United States’ use of legal and informal mechanisms through which corporations provide surveillance data to the government, a relationship I conceptualize as deputation. I produce a historical account of the political economy of the surveillance complex, highlighting the American government’s role in fostering high technological supply and low political accountability. This chapter contrasts alternative explanations of contemporary online surveillance that place companies as the main drivers and cast governments as passive or incapable of keeping up with technology. This distinction highlights the deployment of mass surveillance as a politically self-interested choice, making the invisible hand visible. To support this perspective, I give evidence of how companies’ ability to pursue profit models based on surveillance data (e.g., surveillance capitalism (Zuboff 2019)) has been fostered by direct government investments (e.g., NSF grants, data purchases), forbearance (e.g., nonenforcement of privacy laws), deregulation, and deliberate inaction (e.g., an explicit refusal to regulate, not merely its absence or tolerance by default).
Measuring Privacy Concerns under the Surveillance Complex
This chapter introduces and describes my operationalization of the surveillance complex, testing new measures across different policies. Building on Helen Nissenbaum’s theory of contextual integrity, I hypothesize that people’s attitudes towards deputized surveillance are substantively (and statistically) different from their attitudes towards surveillance conducted by the government alone or, analogously, by the market alone. To test this hypothesis, I surveyed 1,493 Americans throughout August 2021, asking whether they strongly disagree, somewhat disagree, somewhat agree, or strongly agree that a range of surveillance policies should be implemented. I explore their policy preferences using a data-reduction technique, Principal Components Analysis. I find that Americans’ responses produce a multi-dimensional space. This finding alone is a contribution to empirical privacy research: the few researchers who work on multiple surveillance sources tend to average all sorts of practices, whether unattributed (e.g., “I am concerned about my data”), governmental, or corporate, into a single index. While the state’s own surveillance and the market’s own surveillance policies produce two dimensions with some cross-“contamination”, deputized surveillance produces a clear and distinct third dimension. This bolsters a larger contribution to privacy studies: deputized surveillance is more than the sum of its parts. In contradistinction to previous theorizing about contextual integrity, but supporting Chapter 1’s claim that deputized surveillance allows the government to shirk political costs, I find that concern along this distinct dimension is lower than concern about either surveillance capitalism or government surveillance. Theorizing and measuring the phenomenon with this clarity of language better serves the impetus to be holistic yet precise.
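The dimensional analysis described above can be sketched in a few lines. This is a minimal illustration only: the simulated responses, the number of items, and the eigenvalue-greater-than-one (Kaiser) cutoff are my assumptions for demonstration, not the chapter’s actual survey items or retention rule.

```python
# Illustrative sketch: extracting attitudinal dimensions from
# Likert-scale surveillance-policy items with PCA.
# The data and item count are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Simulated 4-point Likert responses (1 = strongly disagree ... 4 = strongly agree)
# for 1,493 respondents across 9 hypothetical policy items.
X = rng.integers(1, 5, size=(1493, 9)).astype(float)

# Standardize items so each contributes equally, then extract components.
X_std = StandardScaler().fit_transform(X)
pca = PCA()
scores = pca.fit_transform(X_std)

# Components with eigenvalues > 1 (Kaiser criterion) suggest distinct
# attitudinal dimensions rather than a single privacy-concern index.
eigenvalues = pca.explained_variance_
n_dims = int((eigenvalues > 1).sum())
print(n_dims, eigenvalues.round(2))
```

With real survey data, the component loadings (which items load on which dimension) would show whether deputized-surveillance items cluster separately from state-only and market-only items.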
Who is mass online surveillance for?
Building on Chapters 1 and 2, I test the argument that support for mass online surveillance is bolstered by othering frames that make disliked groups more salient. Whether inadvertent or deliberate, this shift redefines surveillance by emphasizing harms targeted at particular groups after data collection. Using an experimental factorial vignette design, I test whether reminding people that liked groups are also monitored decreases support for two deputized surveillance policies. Emphasizing competing privacy claims, and moving away from self-referential measures of privacy concerns, allows me to measure the plasticity of people’s opinions, which increase or decrease depending on their random assignment to treatment or control. This novel experiment also contributes to the field by challenging untested sociodemographic assumptions about privacy concerns, offering an alternative explanation focused on how individuals resolve competing claims to privacy and on the securitized frames offered by those supplying surveillance on demand for these policies.
European attitudes towards privacy and security as a policy tradeoff
Scholars generally assume citizens treat security as a valence issue and accept increased government monitoring as a tradeoff. However, these assumptions have not been systematically or comparatively studied, so American preference studies drive the tradeoff narrative in the context of terrorism (Garcia and Geva 2016; Huddy et al. 2005). In the wake of increased scrutiny over commercial privacy practices, the European debate of recent years has shifted towards regulating the corporate handling of personal information. Using the 2014 and 2019 waves of the European Election Voter Study, this chapter pivots back to countries’ regulation of their own data collection. I argue that juxtaposing two valence issues (security and privacy) creates a genuine policy preference question: what, if anything, explains Europeans’ privacy/security policy preferences? I use variables from six plausible theories of voting behavior (Lipset and Rokkan’s frozen cleavage theory, Inglehart’s value change theory, vote choice, ideological preferences over state economic interventionism, individuals’ relation to their domestic government institutions, and European Union transnationalism), along with the survey year and sociodemographic (age, class, education) and news consumption controls. The models include country fixed effects to account for omitted-variable bias. Using a logit fixed-effects model, I also model these relationships by focusing on the substantive differences between those who make the tradeoff and those who do not.
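As a rough illustration of the modeling strategy, a logit with country fixed effects entered as dummies might look like the following. All variable names and the simulated data are hypothetical placeholders of my own, not the chapter’s actual European Election Voter Study variables.

```python
# Illustrative sketch: logit model of accepting the privacy/security
# tradeoff, with country fixed effects as dummies (C(country)).
# Data and variable names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "tradeoff": rng.integers(0, 2, n),       # 1 = accepts more monitoring for security
    "left_right": rng.uniform(0, 10, n),     # ideological self-placement
    "trust_gov": rng.uniform(0, 10, n),      # trust in domestic institutions
    "year_2019": rng.integers(0, 2, n),      # survey-wave indicator (2019 vs. 2014)
    "country": rng.choice(["DE", "FR", "PL"], n),
})

# C(country) expands into country dummies, absorbing time-invariant
# national differences and guarding against omitted-variable bias.
model = smf.logit(
    "tradeoff ~ left_right + trust_gov + year_2019 + C(country)", data=df
).fit(disp=0)
print(model.params.round(3))
```

Substantive differences between tradeoff-makers and non-makers would then be read off predicted probabilities rather than raw coefficients.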
MA dissertation
An Architecture of Control: Automatic Takedown Notices in Academic Digital Publishing
A decade after I graduated, UChicago's MAPSS program has still not made the dissertation available, so here are the original version and a shorter "updated" version.
Summary
The ease of digital distribution has come with a growing surveillance and control apparatus. I trace the growth of digital private policing as a deliberate result of the US Digital Millennium Copyright Act, using Google’s transparency data on “takedown notices”. I analyze the industries’ lobbying of Congress and the bill’s legal language and incentives, and I quantitatively examine over a decade of copyright holders’ requests and Google’s role in enforcing intellectual property rights, particularly over academic works.