Dana Turjeman

Research

Publications (* denotes equal contribution)

Turjeman, Dana and Fred M. Feinberg, “When the Data Are Out: Measuring Behavioral Changes Following a Data Breach”.  

Forthcoming (2023), Marketing Science. Preprint available here.

As the quantity and value of data increase, so do the severity of data breaches and customer privacy invasions. While firms typically publicize their post-breach protective actions, little is known about the social, behavioral, and economic aftereffects of major breaches. Specifically, do individual customers alter their interactions with the firm, or do they continue with "business as usual"? We address this general issue via data stemming from a matchmaking website, one for those seeking an extramarital affair, that was breached. The data include de-identified profiles of paying male users from the United States, and their activities on the website from the time they joined until 3 weeks after the disclosure of the data breach. A challenge in making causal inferences in the setting of a massive and highly publicized data breach is that all users were informed of the breach at the same time. In such cases of "information shock", there is no obvious control group. To resolve this problem, we propose Temporal Causal Inference: for each group of users who joined in a specific time period, we create an appropriate control group from all users who had joined prior to it. This procedure helps control for, among other elements, potential trends in both individual and temporal site usage that broadly fall under the rubric of "normal" usage trajectories. Following the construction of suitable control groups, we apply and extend several causal inference approaches. In particular, we adapt Causal Forests (among other forest-based methods) into Temporal Causal Forests, to better suit such temporal inference settings. The combination of Temporal Causal Inference and Temporal Causal Forests allows us to extract insights regarding the homogeneous (average) treatment effect, along with nontrivial heterogeneity in responses to the data breach. Our analyses reveal a decrease in the probability of being active in searching or messaging on the website, and a notable increase in the probability of deleting photos, ostensibly to avoid personal identification. We investigate several potential sources of heterogeneity in response to the breach announcement, and conclude with a discussion of both managerial consequences and policy considerations.
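For intuition, the cohort-based control-group construction can be sketched in a few lines of code. The sketch below is purely illustrative and is not the paper's implementation: the column names (join_week, week, active) and the simple mean contrast are assumptions, and the actual method layers Temporal Causal Forests on top of this cohort matching.

```python
# Illustrative sketch only: cohort-based control groups for an "information
# shock" that hits all users at once. Column names and the simple mean
# contrast are hypothetical; the paper's Temporal Causal Forests are omitted.
import pandas as pd

def temporal_control_estimate(panel: pd.DataFrame, breach_week: int) -> pd.DataFrame:
    """panel: one row per user-week, with columns join_week, week, active."""
    panel = panel.copy()
    panel["tenure"] = panel["week"] - panel["join_week"]

    # Treated observations: activity recorded after the breach disclosure.
    treated = panel[panel["week"] >= breach_week]

    # Control observations: users who joined earlier, measured at the same
    # tenure but entirely before the breach ("normal" usage trajectories).
    control = panel[panel["week"] < breach_week]

    treated_rate = treated.groupby(["join_week", "tenure"])["active"].mean()
    control_rate = control.groupby("tenure")["active"].mean()

    # Naive cohort-by-tenure contrast of activity rates.
    out = treated_rate.reset_index(name="treated_rate")
    out["control_rate"] = out["tenure"].map(control_rate)
    out["effect"] = out["treated_rate"] - out["control_rate"]
    return out
```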

Ayalon, Oshrat, Dana Turjeman and Elissa Redmiles, “Exploring Privacy and Incentives Considerations in Adoption of COVID-19 Contact Tracing Apps”.

Forthcoming (2023), USENIX (accepted paper available upon request).

Mobile Health (mHealth) apps, such as COVID-19 contact tracing and other health-promoting technologies, help support personal and public health efforts in response to the pandemic and other health concerns. However, due to the sensitive data handled by mHealth apps, and their potential effect on people's lives, their widespread adoption demands trust in a multitude of aspects of their design. In this work, we report on a series of conjoint analyses (N = 1,521) to investigate how COVID-19 contact tracing apps can be better designed and marketed to improve adoption. Specifically, with a novel design of randomization on top of a conjoint analysis, we investigate people's privacy considerations relative to other attributes when they are contemplating contact-tracing app adoption. We further explore how their adoption considerations are influenced by deployment factors such as offering extrinsic incentives (money, healthcare) and user factors such as receptiveness to contact-tracing apps and sociodemographics. Our results, which we contextualize and synthesize with prior work, offer insight into the most desired digital contact-tracing products (e.g., app features) and how they should be deployed (e.g., with incentives) and targeted to different user groups who have heterogeneous preferences. 

Fan, Ying*, A. Yeşim Orhun* and Dana Turjeman, “A Tale of Two Pandemics: The Enduring Partisan Differences in Actions, Attitudes, and Beliefs during the Coronavirus Pandemic”.

Forthcoming (2023), PLOS ONE (accepted paper available here).

Early in the new coronavirus disease (COVID-19) pandemic, scholars and journalists noted partisan differences in behaviors, attitudes, and beliefs. Based on location data from a large sample of smartphones, as well as 13,334 responses to a proprietary survey spanning 10 months from April 1, 2020 to February 15, 2021, we document that the partisan gap has persisted over time and that the lack of convergence occurs even among individuals who were at heightened risk of death. Our results point to the existence and persistence of the interaction of partisanship and information acquisition.

Dooley, Samuel, Dana Turjeman, John P. Dickerson, and Elissa M. Redmiles (2022), “Field Evidence of the Effects of Pro-sociality and Transparency on COVID-19 App Attractiveness”. 2022 ACM Conference on Human Factors in Computing Systems (CHI). Available on SocArXiv: https://osf.io/preprints/socarxiv/gm6js/

COVID-19 exposure-notification apps have struggled to gain adoption. Existing literature posits privacy concerns, insufficient data transparency, and the type of appeal used to pitch the pro-social behavior of installing the app as potential causes of this low adoption. In a field experiment, we advertised CovidDefense, Louisiana's COVID-19 exposure-notification app, at the time it was released. We find that all three hypothesized factors - privacy, data transparency, and appeals framing - relate to app adoption, even when controlling for age, gender, and community density. Specifically, we find that collective-good appeals are effective in fostering pro-social COVID-19 app behavior in the field. Our results empirically support existing policy guidance on the use of collective-good appeals and offer real-world evidence in the ongoing debate on the efficacy of such appeals. Further, we offer nuanced findings regarding the efficacy of transparency - about both privacy and data collection - in encouraging health technology adoption and pro-social COVID-19 behavior. Our results may aid in fostering pro-social public-health-related behavior and may inform the broader debate regarding privacy and data transparency in digital healthcare.

Turjeman, Dana and Fred M. Feinberg (2020), “Our Data Driven Future: Promise, Perils and Prognoses”. In Review of Marketing Research, Vol. 17, pp. 105-121, Emerald Publishing Limited. Available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3440726

Nowadays, most of our activities and personal details are recorded by one entity or another. These data are used for many applications that fundamentally enrich our lives, such as navigation systems, social networks, search engines, and health monitoring. On the darker side of data collection lie usages that can harm us and threaten our sense of privacy. Marketing, as an academic field and corporate practice, has benefited tremendously from this era of data abundance, but has concurrently heightened the risk of associated harms. 

In this paper, we discuss both the great advantages and potential harms ushered in by this era of data collection, as well as ways to mitigate the harms while maintaining the benefits. Specifically, we propose and discuss classes of potential solutions: methods for collecting less data overall, transparency of code and models, federated learning, and identity management tools, among others. Some of these solutions can be implemented now, others require a longer horizon, but all can begin through the advocacy of Marketing Research. We also discuss possible ways to improve on the benefits of data collection by developing methods that help individuals pursue their long-term goals while advocating for privacy in such pursuits.

Working Papers (* denotes equal contribution)

Tian, Longxiu*, Dana Turjeman* and Samuel Levy, “Privacy Preserving Data Fusion” (invited for revision at Marketing Science; available on SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4451656)

Data fusion combines multiple datasets to make inferences that are more accurate, generalizable, and useful than those made with any single dataset alone. However, data fusion poses a privacy hazard due to the risk of revealing user identities. We propose a privacy preserving data fusion (PPDF) methodology intended to preserve user-level anonymity while allowing for a robust and expressive data fusion process. PPDF is based on variational autoencoders and normalizing flows, together enabling a highly expressive, nonparametric, Bayesian, generative modeling framework, estimated in adherence to differential privacy – the state-of-the-art theory for privacy preservation. PPDF does not require the same users to appear across datasets when learning the joint data generating process and explicitly accounts for missingness in each dataset to correct for sample selection. Moreover, PPDF is model-agnostic: it allows for downstream inferences to be made on the fused data without the analyst needing to specify a discriminative model or likelihood a priori. 

We undertake a series of simulations to showcase the quality of our proposed methodology. Then, we fuse a large-scale customer satisfaction survey with the customer relationship management (CRM) database of a leading U.S. telecom carrier. The resulting fusion yields the joint distribution between survey satisfaction outcomes and CRM engagement metrics at the customer level, including the likelihood of leaving the company’s services. Highlighting the importance of correcting for selection bias, we illustrate the divergence between the observed survey responses and the imputed distribution over the customer base. Managerially, we find a negative, nonlinear relationship between satisfaction and future account termination across the telecom carrier’s customers, which can aid in segmentation, targeting, and proactive churn management. Overall, PPDF can substantially reduce the risk of compromising privacy and anonymity when fusing different datasets.
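The generative-model backbone can be illustrated with a stripped-down sketch. The code below is an assumption-laden toy, not the authors' PPDF implementation: it shows a masked variational autoencoder that learns a joint model over variables observed in different datasets, while the normalizing-flow components and the differentially private (DP-SGD) training that the method relies on are omitted.

```python
# Toy sketch, not the authors' implementation: a masked VAE that learns a joint
# generative model over variables coming from different datasets. Missing
# entries of x are assumed to be zero-filled, with `mask` marking observed
# cells. Normalizing flows and DP-SGD (per-example gradient clipping plus
# calibrated noise), needed for differential privacy, are omitted.
import torch
import torch.nn as nn

class FusionVAE(nn.Module):
    def __init__(self, n_vars: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_vars * 2, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_vars))

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Feed observed values together with their missingness mask, so the
        # model explicitly accounts for which variables each source observes.
        h = self.encoder(torch.cat([x * mask, mask], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.decoder(z)
        # Reconstruction error on observed entries only, plus KL regularizer.
        recon = (((x_hat - x) ** 2) * mask).sum(dim=-1).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon + kl

# Example: 32 customers, 5 variables, roughly half observed per record.
model = FusionVAE(n_vars=5)
x = torch.zeros(32, 5)
mask = torch.bernoulli(torch.full((32, 5), 0.5))
loss = model(x, mask)
```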

Research in (Less) Progress

Fan, Ying*, A. Yeşim Orhun* and Dana Turjeman, “Heterogeneous Actions, Beliefs, Constraints and Risk Tolerance During the COVID-19 Pandemic”.

NBER Working Paper No. 27211: https://www.nber.org/papers/w27211 

During a pandemic, an individual's choices can determine outcomes not only for the individual but also for the entire community. Beliefs, constraints and preferences may shape behavior. This paper documents demographic differences in behaviors, beliefs, constraints and risk preferences across gender, income and political affiliation lines during the new coronavirus disease (COVID-19) pandemic. Our main analyses are based on data from an original nationally representative survey covering 5,500 adult respondents in the U.S. We find substantial gaps in behaviors and beliefs across gender, income and partisanship lines; in constraints across income levels; and in risk tolerance between men and women. Based on location data from a large sample of smartphones, we also document significant differences in mobility across demographics, which are consistent with our findings based on the survey data.

Turjeman, Dana and A. Yeşim Orhun, “Information Preferences on Information Collection”

We explore preferences to learn about privacy risks before adopting an online product or service. When people click “I’ve read and agree to the privacy policy”, they nearly always have not read it; this phenomenon is commonly described as “the biggest lie on the internet”. Several reasons have been proposed in the literature, among them Information Overload (the length and readability of the policies), Digital Resignation (the feeling that one cannot do anything about them), and the common belief that the policies can change at any time. We show that another reason for not attending to information about privacy, even when it is available and accessible to the customer, is active information avoidance. In Study 1 we show that 20% of participants chose not to know whether a social platform they use collected their personal data, even though the answer was merely “yes” or “no”. When asked why they chose not to know, 78% of those who avoided the information stated they did so for reasons such as “the answer terrifies me” or “ignorance is bliss”. These participants’ decisions were driven mainly by a desire to avoid information in order to manage their anticipated emotional responses, a pattern we refer to as active information avoidance. The remaining 22% of those who did not want to be informed cited low interest as the reason (e.g., they do not share information on the social platform, or they do not care about privacy). Their decisions were driven by an evaluation of the value of information: if the information would not change the way they behaved, it was redundant.

In Study 2, therefore, we focus on teasing apart active information avoidance from a perceived low value of information (“I do not want to know about privacy even though, or because, I care” vs. “I do not care about privacy”). Teasing apart these two reasons requires knowing individual privacy preferences. To determine the actual value of information at the individual level, we first measure each individual’s preferences towards privacy, which yields the individual part-worth of each attribute; from these part-worths we estimate the value of actively remaining ignorant about the attribute. We then contrast this with the willingness to pay for information on each attribute relative to other attributes, which allows us to measure the willingness to stay ignorant (i.e., to avoid information) about privacy relative to other attributes. Understanding what drives people to actively seek or avoid information on privacy risks (such as privacy policies and data breach announcements) may improve the way these risks are presented, so as to avoid mistrust, consumer harm, and a false sense of privacy.
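As a toy illustration of the part-worth logic described above (the attributes, data, and estimation shortcut below are hypothetical, not the study's design), the following sketch recovers attribute part-worths from conjoint-style ratings and converts them into a relative willingness to pay, against which the value of information about an attribute could later be contrasted.

```python
# Hypothetical conjoint data and attribute names, for illustration only.
import numpy as np

# Each row is a product profile rated by a respondent; columns are
# [price, collects_location, shares_data] (privacy attributes dummy-coded).
X = np.array([[4.0, 1, 1],
              [2.0, 0, 1],
              [6.0, 1, 0],
              [3.0, 0, 0]])
ratings = np.array([5.0, 8.0, 5.0, 8.5])  # stated utility of each profile

# Part-worths via least squares (hierarchical Bayes would be used in practice).
design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(design, ratings, rcond=None)
price_coef, privacy_coefs = beta[1], beta[2:]

# Willingness to pay to avoid each privacy-invasive attribute, in price units:
# the price change that exactly offsets the attribute's (dis)utility.
wtp_avoid = privacy_coefs / price_coef
print({k: round(float(v), 2)
       for k, v in zip(["collects_location", "shares_data"], wtp_avoid)})
```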

Turjeman, Dana, A. Yeşim Orhun and Dan Ariely, “Useful Sharing: The Role of Accountability in Financial Decisions”