Members: Shoshana Simons, Claire Bergey, Maria Ryskina, Rosemary ✿, Lauren Oey, Andrew Buskell
Description: This project explores the ways in which emerging technologies surveil, discipline, and standardize intimate human faculties, and the concomitant political side-effects. Machine learning technologies are often thought of as unidirectionally shaped by their creators and users. But how are we humans shaped by machines? In this project, we focus specifically on the text 'prediction' engine recently added to many Google technologies, such as Google Docs. While this engine is supposedly learning from the linguistic patterns of its users, the influence in the other direction is undeniable to anyone who engages with these technologies. That is, not only do we users discipline the machine: the machine also disciplines us.
To interrogate the politics of text prediction, this project brings together historical, empirical, and critical methods. To test the influence of text prediction on language use in a controlled environment, we have participants respond to a prompt in a text box with (or without) text prediction. The text predictor is built out of n-gram language models trained on the Corpus of Contemporary American English (COCA) and child-directed speech...
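As a rough illustration of the kind of predictor involved (a minimal sketch, not the project's actual implementation), the Python below builds a tiny trigram language model with backoff and suggests a next word; the corpus and prompt here are placeholders, and the real models trained on COCA and child-directed speech are not reproduced.

from collections import Counter, defaultdict

def train_ngram_counts(sentences, n=3):
    # Build context -> next-word counts for every context length up to n-1.
    counts = defaultdict(Counter)
    for tokens in sentences:
        padded = ["<s>"] * (n - 1) + tokens
        for i in range(n - 1, len(padded)):
            for k in range(n):  # context lengths 0 .. n-1
                counts[tuple(padded[i - k:i])][padded[i]] += 1
    return counts

def predict_next(counts, prefix, n=3):
    # Return the most frequent continuation, backing off to shorter contexts.
    tokens = ["<s>"] * (n - 1) + prefix
    for k in range(n - 1, -1, -1):
        c = counts.get(tuple(tokens[len(tokens) - k:]) if k else ())
        if c:
            return c.most_common(1)[0][0]
    return None

# Toy usage with an illustrative two-sentence corpus.
corpus = ["the cat sat on the mat".split(), "the cat chased the dog".split()]
counts = train_ngram_counts(corpus)
print(predict_next(counts, "the cat".split()))  # -> "sat"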
Suggested readings:
M. NourbeSe Philip, "Discourse on the Logic of Language," https://www.youtube.com/watch?v=424yF9eqBsE