TUILES: Tuesday Informal Lunchtime Seminars Spring 2015
Tuesday, April 14th
KMC 8-170
Lunch will be available at 12:15 pm. The seminar begins at 12:30 pm.
"A Privacy Cloaking Device:
Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals Online"
Daizhuo Chen, Samuel Fraiberger, Rob Moakler, and Foster Provost
Recent studies show the remarkable power of information disclosed by users on social network sites to infer the users' personal characteristics via predictive modeling. In response, attention is turning increasingly to the transparency that sites provide to users about what inferences are drawn and why, as well as to the control users can be given over the inferences drawn about them. We draw on the "evidence counterfactual" as a means for providing transparency into the specific reason(s) why a particular inference is drawn about a user. We then introduce the idea of a "cloaking device" as a vehicle to provide (and to study) control. Specifically, the cloaking device gives users a mechanism to inhibit the use of particular pieces of information in inference; combined with the evidence counterfactual, a user can control model-driven inferences, hopefully with minimal disruption to her normal activity. Using these analytical tools, we ask two main questions: (1) How much information must users cloak in order to significantly affect inferences about their personal traits? We find that usually a user must cloak only a small portion of her actions in order to inhibit inference. We also find, encouragingly, that false-positive inferences are significantly easier to cloak than true-positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. In our main results we replicate the methodology of Kosinski et al. (PNAS 2013) for modeling personal traits; we then demonstrate a simple modeling change that still gives accurate inferences of personal traits but requires users to cloak substantially more information to affect the inferences drawn. The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they can also make modeling choices that make such control easier or harder for their users.
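The cloaking idea described in the abstract can be sketched in a few lines. The sketch below assumes a simple linear (logistic-style) model over binary features such as page "Likes" — an assumption for illustration, not the paper's actual model — and the feature names and weights are made up. It greedily hides the strongest pieces of positive evidence until the predicted probability of the trait falls below the decision threshold; the hidden set doubles as an evidence counterfactual ("had these been hidden, the inference would not have been drawn").

```python
import math

def predict(weights, features, bias=0.0):
    """Probability the model infers the trait from the user's visible features."""
    score = bias + sum(weights.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-score))

def cloak(weights, features, threshold=0.5):
    """Greedily hide the strongest positive evidence until the inference flips.

    Returns the list of cloaked features, in the order they were hidden.
    """
    visible = set(features)
    cloaked = []
    while visible and predict(weights, visible) >= threshold:
        # Hide the visible feature contributing most to the positive inference.
        strongest = max(visible, key=lambda f: weights.get(f, 0.0))
        if weights.get(strongest, 0.0) <= 0:
            break  # no positive evidence left; cloaking cannot flip this inference
        visible.remove(strongest)
        cloaked.append(strongest)
    return cloaked

# Hypothetical model weights and a user's "Likes":
w = {"like_A": 2.0, "like_B": 1.5, "like_C": 0.3, "like_D": -0.5}
user = ["like_A", "like_B", "like_C", "like_D"]
print(cloak(w, user))  # hiding like_A and like_B drops the score below threshold
```

Under this linear-model assumption, the abstract's two questions become concrete: (1) how long the returned list is relative to the user's actions, and (2) whether the modeler can choose weights (e.g., spreading predictive power across many weakly informative features) so that the list must be much longer before the inference changes.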