Coffee, tea, and light breakfast provided.
Welcome and logistics from Emily Tseng (Microsoft Research / University of Washington) on behalf of the organizing team.
Presentation of accepted FAccT papers from scholars who are unable to travel to the main conference. Each paper will have a 15-minute slot, with a suggested split of 9-10 minutes for presentation and 5-6 minutes for audience Q&A. The paper order below is subject to change.
Session chair: Wesley Deng (Carnegie Mellon University)
1A. “It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services
Shira Michel (Northeastern University), Sufi Kaur (Northeastern University), Sarah Elizabeth Gillespie (Northeastern University), Jeffrey Gleason (Northeastern University), Christo Wilson (Northeastern University), Avijit Ghosh (Hugging Face / University of Connecticut)
1B. Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models
Yuxuan Li (Carnegie Mellon University), Hirokazu Shirado (Carnegie Mellon University), Sauvik Das (Carnegie Mellon University)
1C. What Remains Opaque in Transparency Initiatives: Visualizing Phantom Reductions through Devious Data Analysis
Lindsay Poirier (Smith College), Juniper Huang (Smith College), Casey MacGibbon (Smith College)
1D. The Brief and Wondrous Life of Open Models
Madiha Zahrah Choksi (Cornell University), Ilan Mandel (Cornell University), Sebastian Benthall (New York University School of Law)
Coffee, tea, and light bites provided.
Presentation of accepted FAccT papers from scholars who are unable to travel to the main conference. Each paper will have a 15-minute slot, with a suggested split of 9-10 minutes for presentation and 5-6 minutes for audience Q&A. The paper order below is subject to change.
Session chair: Jennah Gosciak (Cornell University)
2A. Privacy of Groups in Dense Street Imagery
Matt Franchi (Cornell University), Hauke Sandhaus (Cornell University), Madiha Zahrah Choksi (Cornell University), Severin Engelmann (Cornell University), Wendy Ju (Cornell University), Helen Nissenbaum (Cornell University)
2B. Technical Solutions to Emotion AI's Privacy Harms: A Systematic Literature Review
Shreya Chowdhary (University of Michigan), Alexis Shore Ingber (University of Michigan), Nazanin Andalibi (University of Michigan)
2C. Auditing the Audits: Lessons for Algorithmic Accountability from Local Law 144's Bias Audits
Marissa Gerchick (ACLU), Ro Encarnación (University of Pennsylvania), Cole Tanigawa-Lau (Stanford University), Lena Armstrong (Harvard University), Ana Gutierrez (ACLU), Danaé Metaxa (University of Pennsylvania)
Lunch options within a 15-minute walk of MSR are listed here: https://sites.google.com/view/altfacct2025/lunch-recs
Presentation of accepted FAccT papers from scholars who are unable to travel to the main conference. Each paper will have a 15-minute slot, with a suggested split of 9-10 minutes for presentation and 5-6 minutes for audience Q&A. The paper order below is subject to change.
Session chair: Alice Qian Zhang (Carnegie Mellon University)
3A. Ownership, Not Just Happy Talk: Co-Designing a Participatory Large Language Model for Journalism
Emily Tseng (Microsoft Research / University of Washington), Meg Young (Data & Society), Marianne Aubin Le Quéré (Cornell University), Aimee Rinehart (The Associated Press), Harini Suresh (Brown University)
3B. AI Trust Reshaping Administrative Burdens: Understanding Trust-Burden Dynamics in LLM-Assisted Benefits Systems
Jeongwon Jo (University of Notre Dame), He "Albert" Zhang (Pennsylvania State University), Jie Cai (Tsinghua University), Nitesh Goyal (Google Research)
3C. Not Like Us, Hunty: Measuring Perceptions and Behavioral Effects of Minoritized Anthropomorphic Cues in LLMs
Jeffrey Basoah (University of Washington), Daniel Chechelnitsky (Carnegie Mellon University), Tao Long (Columbia University), Katharina Reinecke (University of Washington), Chrysoula Zerva (Instituto Superior Técnico), Kaitlyn Zhou (Stanford University), Mark Diaz (Google Research), Maarten Sap (Carnegie Mellon University)
The main plenary session will break for small group discussions and mingling. Attendees will have the opportunity to join Birds of a Feather (BoF) sessions or create new BoFs on the fly.
Space-limited CRAFT workshop: "Invisible By Design? Generative AI and Mirrors of Misrepresentation", led by Kimi Wenzel (Carnegie Mellon University) and Avijit Ghosh (Hugging Face)
Limited to 30 participants; apply here.
BoF 1: "Global fairness, local harms: Rethinking AI safety across borders", led by Renata Barreto (UC Berkeley School of Law)
AI systems often claim to be “safe” or “fair,” but what do those terms mean outside Silicon Valley or Brussels? In this BoF, we’ll unpack what it takes to design safety protocols that account for structural inequality, local context, and the messy, political nature of real-world deployment.
BoF 2: "Fairness in Medical Machine Learning", led by Cynthia Feeney (Tufts University)
A discussion of what machine learning methods are needed to make medical applications of ML equitable, and of how existing practices in medicine can inform the FAccT community's work.
BoF 3: "How should the cooperative movement respond to advancements in AI?", led by Jared Katzman (University of Michigan)
Let's talk about the challenges and opportunities AI poses to cooperative businesses and how we could design alternative ownership models that address some of the negative impacts of AI automation.
BoF 4: "The Current Research Ecosystem and the Future of FAccT Research", led by Jessica Forde (Brown University)
With federal funding cuts, possible decreases in student visa availability, and reduced ability to travel internationally, how do we continue to do FAccT research?
BoF 5: "Approaches to algorithmic justice", led by Princess Sampson (University of Pennsylvania)
Many scholars describe their contributions in terms of FATE or algorithmic justice while conducting mixed-methods, interdisciplinary computing research: auditing current systems, documenting user perspectives, prototyping and evaluating novel user-serving tools and interventions, and establishing how practitioner and governance practices around bias and harm can become proactively and preventatively sociotechnical rather than reactive to scandal or regulation. What distinctions and demarcations exist for you?
BoF 6: "It's (Still) Power: Mechanisms for Tech Accountability to Drive Responsible Development", leader to be announced on-site
The FAccT community's work to interrogate and improve the social outcomes of AI and ML technology is necessary, but it is often far from sufficient in a global context where the entities with the most direct financial stake in AI adoption are subject to the fewest checks and balances. Technical expertise has a role to play, both within and outside technology companies, in addressing these power disparities, for example by reducing information asymmetries or empowering critics, but this work often faces significant barriers. Let's use this time to share lessons on what helps, what works, and how to do this work safely.
Coffee, tea, and light bites provided.
The keynote will be followed by an audience Q&A.
Title: Understanding the Harms of AI-Mediated Communication
Abstract: AI is now increasingly present in human-to-human communication, from our interpersonal exchanges to our work artifacts to our online communities and media, a phenomenon we have called AI-Mediated Communication (AIMC). Since 2018, our studies of AIMC have documented different types of harms that can result from the introduction of AI into human communication. My talk will cover some of these harms, including social system harms, where AI writing assistants can covertly shift our communication language, content, and even our attitudes; quality-of-service harms, where such AI assistants differentially help people from different backgrounds; and what we have termed "perceptual harms," where AI suspicion can reduce interpersonal evaluations and trust, and do so differently for people from different groups.
Mor Naaman is Associate Dean for Faculty Affairs and Professor at the Jacobs Technion-Cornell Institute at Cornell Tech, where he holds the Don and Mibs Follett Chair, and in the Information Science Department at Cornell University. Mor leads a research group looking at topics at the intersection of technology, media, and democracy. The group applies multidisciplinary techniques, from machine learning to qualitative social science, to study our information ecosystem and its challenges, with a special focus on AI-mediated communication and its impact on society.
Moderated by Alayna Kennedy
Responsible AI (RAI) in industry remains a prominent and growing field, and many industry practitioners focus on operationalizing findings from RAI research into practice. This panel will provide an inside look at the many forms RAI work can take within industry, from full-time application work to process-based oversight to technical implementation of new research. It will be of interest to anyone curious about potential career pathways for RAI in industry.
This panel will feature five to seven researchers from a range of Fortune 500 firms, including big tech, management consulting, e-commerce, and finance. Panelists' names and affiliations will be shared only in person to preserve their privacy and allow them to speak freely. This session will not be live-streamed or recorded.
Closing remarks from Ashley Walker (Google) on behalf of the organizing team.
Let's keep the conversations going! We have a few options for post-conference receptions and would welcome additional informal groups organized on the fly.
For those interested in a bar, a group will be at Kingston Hall. Look for Alayna Kennedy.
For those interested in a park, a group will be gathering at the southeast corner of Washington Square Park. Look for Meg Young.