My research examines the rise of digital intimacy and companion AI technologies, with a focus on how these systems impact youths’ social and psychological development. Building on a strong background in social media transparency, risk detection, and applied ethics—complemented by formal training in statistics—I use mixed-methods approaches to analyze large-scale datasets from social platforms and generative AI applications.
Broadly, my work falls under the domain of empirical ethics in digital intimacy: I employ both quantitative and qualitative methods to generate real-world evidence in support of ethical claims about emerging technologies and their influence on vulnerable users.
[June 2025 - Present]
With the rise of generative AI services like ChatGPT and LLaMA, companion AI has also emerged: a class of applications designed to mimic human-to-human relationships. While digital intimacy is not unique to companion AI, having already gained prominence through large-scale social media, these systems pose new risks: they may normalize non-consent and unhealthy interpersonal dynamics among the young people who use them. This research explores those dangers and aims to offer concrete, implementable safeguards that protect young people from the more harmful consequences of companion AI.
[October 2024 - Present]
AI and data ethics have very little to do with the ethical theories whose names they borrow. The training and testing data required for model development make reliance on ephemeral, culturally normative data unavoidable, but the implicit acceptance of cultural relativism in algorithm design is not inescapable. By explicitly encoding moral features and applying moral filters to algorithms, I aim to align AI development more closely with logically and morally defensible principles.
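To make the idea of "explicitly encoding moral features and applying moral filters" concrete, here is a minimal sketch of what such a filter could look like in practice. It is illustrative only: the feature names, filter names, and example values are hypothetical placeholders, not the project's actual design.

```python
# Illustrative sketch only: a hypothetical "moral filter" applied explicitly to a
# candidate output, rather than relying on whatever norms are implicit in the
# training data. All feature and filter names below are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class MoralFeatures:
    """Explicitly encoded moral features of a candidate output."""
    depicts_consent: bool       # does the interaction model consent?
    targets_minor: bool         # is a minor the subject of the content?
    normalizes_coercion: bool   # does it present coercion as acceptable?


# A filter is a named, inspectable predicate over the encoded features.
MoralFilter = Callable[[MoralFeatures], bool]

FILTERS: List[Tuple[str, MoralFilter]] = [
    ("requires_consent", lambda f: f.depicts_consent),
    ("protects_minors", lambda f: not f.targets_minor),
    ("rejects_coercion", lambda f: not f.normalizes_coercion),
]


def passes_moral_filters(features: MoralFeatures) -> Tuple[bool, List[str]]:
    """Return whether a candidate passes, plus the names of any failed filters."""
    failed = [name for name, check in FILTERS if not check(features)]
    return (len(failed) == 0, failed)


if __name__ == "__main__":
    example = MoralFeatures(depicts_consent=False, targets_minor=False,
                            normalizes_coercion=True)
    print(passes_moral_filters(example))  # (False, ['requires_consent', 'rejects_coercion'])
```

The point of the sketch is that each moral commitment is a named, auditable rule rather than an unstated property inherited from the data.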
[October 2024 - May 2025]
Many social media platforms publish annual or biannual transparency reports meant to disclose the types of risks and harms that occur on their platforms and the enforcement policies for handling them. In reality, most of these reports are emaciated, lacking sufficient breadth or depth in the topics they cover. As a result, social media companies can claim transparency without satisfying what the term actually requires. This project makes the gaps in these reports explicit and offers a comprehensive taxonomy of the subjects they should address.
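The gap analysis can be thought of as a coverage check of each report against a reference taxonomy of harms. The sketch below illustrates that idea only; the category names and example report topics are placeholders, not the taxonomy proposed in the paper.

```python
# Minimal sketch, not the project's actual method: compare the topics a
# transparency report covers against a reference taxonomy of harms to make
# coverage gaps explicit. Taxonomy entries below are placeholder examples.
HARM_TAXONOMY = {
    "harassment", "hate speech", "self-harm", "child sexual exploitation",
    "non-consensual intimate imagery", "misinformation", "grooming",
}


def coverage_gaps(report_topics: set) -> dict:
    """Report which taxonomy categories a transparency report covers or omits."""
    covered = HARM_TAXONOMY & report_topics
    missing = HARM_TAXONOMY - report_topics
    return {
        "covered": sorted(covered),
        "missing": sorted(missing),
        "coverage_rate": len(covered) / len(HARM_TAXONOMY),
    }


if __name__ == "__main__":
    example_report = {"harassment", "hate speech", "misinformation"}
    print(coverage_gaps(example_report))
```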
Publications
Tyler Chang, Joseph J Trybala, Sharon Bassan, and Afsaneh Razi. 2025. Opaque Transparency: Gaps and Discrepancies in the Report of Social Media Harms. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '25). Association for Computing Machinery, New York, NY, USA, Article 424, 1–12. https://doi.org/10.1145/3706599.3719829
[September 2024 - May 2025] [CURRENTLY PAUSED]
Many online sexual violence risk-detection systems operate post hoc, addressing the issue only after the violence has already occurred. This fails victims and undermines the safety of social media platforms. By developing triaging criteria based on non-malicious risk profiles, this project aims to detect online sexual violence in real time and thereby preempt the need for after-the-fact human intervention or recall.
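As a rough illustration of what real-time triaging could look like, the sketch below scores an interaction against a few contextual risk signals and escalates when a threshold is crossed. The signals, weights, and threshold are hypothetical placeholders and do not reflect the project's actual triaging criteria.

```python
# Illustrative sketch only: a rule-based triage score over contextual risk
# signals, intended to flag risky interactions before harm occurs rather than
# after the fact. Signals, weights, and the cutoff are invented placeholders.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class InteractionProfile:
    sender_is_stranger: bool        # no prior relationship with the recipient
    recipient_is_minor: bool        # recipient's account indicates a minor
    requests_private_channel: bool  # pushes the conversation off-platform
    requests_images: bool           # asks for photos early in the exchange


# Hypothetical weights; in practice these would be learned or calibrated.
WEIGHTS = {
    "sender_is_stranger": 0.2,
    "recipient_is_minor": 0.3,
    "requests_private_channel": 0.2,
    "requests_images": 0.3,
}

TRIAGE_THRESHOLD = 0.5  # placeholder cutoff for escalation


def triage(profile: InteractionProfile) -> Tuple[float, bool]:
    """Return a risk score in [0, 1] and whether the interaction should be escalated."""
    score = sum(w for name, w in WEIGHTS.items() if getattr(profile, name))
    return score, score >= TRIAGE_THRESHOLD


if __name__ == "__main__":
    example = InteractionProfile(True, True, False, True)
    print(triage(example))  # (0.8, True)
```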