AI systems increasingly participate in the social world. They help diagnose patients, screen applicants, allocate resources, and issue recommendations that often count as institutional decisions. These developments raise pressing yet under-explored philosophical questions: What is the ontological status of AI within our institutions? Could AI systems ever count as members of groups, rather than as non-member tools or instruments? And if they could be members, how would their presence alter the agency and moral properties of the groups to which they belong? My current research examines how widespread AI integration in institutions requires us to theorize about the social world in ways that include AI alongside humans.
Alongside this work, I have broader research interests in technology ethics, especially character-based approaches to understanding how we can live well with emerging technologies. In a series of published papers, I examine how the virtue of temperance can be repurposed to help us navigate a world saturated with digital technologies.