Joint Affinity Group Poster Session
Queer in AI posters
William Agnew (University of Washington); Arjun Subramonian (UCLA); Juan Pajaro Velasquez (Youth Observatory ISOC); Ashwin S (QueerInAI)
AI, machine learning, and data science methods are already pervasive in our society and technology, affecting all of our lives in many subtle ways. Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a direct result of this exclusion, these technologies have long histories of being less useful, or even harmful, to minoritized groups. This infuriating history illustrates that industry cannot be trusted to self-regulate, and it explains why trust in commercial AI systems and development has been lost. We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist, non-exploitative participatory design principles and strong, outside, continual monitoring and testing. We additionally explain the importance of considering aspects of trustworthiness beyond transparency, fairness, and accountability: specifically, treating justice and shifting power to the people and the disempowered as core values of any trustworthy AI system. Creating trustworthy AI starts by funding, supporting, and empowering groups like Queer in AI so that the field of AI has the diversity and inclusion needed to credibly and effectively develop trustworthy AI. Through our years of work and advocacy, we have developed expert knowledge around questions of whether and how gender, sexuality, and other aspects of identity should be used in AI systems, and how harms along these lines should be mitigated. Based on this, we discuss a gendered approach to AI, propose a queer epistemology, and analyze the benefits it can bring to AI.
Dylan Paré (University of Calgary); Scout Windsor (University of Calgary); John Craig (Queer Code Collective)
We present Mementorium, an interactive, branching narrative told in immersive virtual reality (VR). The player uncovers the narrator’s memories of gender- and sexuality-based marginalization in STEM learning environments, moving from childhood to early adulthood. Mementorium’s design builds upon our previous designs and research on queer reorientations to computing and queer approaches to embodied learning in VR. Even when LGBTQ+ people’s exclusion is acknowledged, approaches to addressing the problem often treat LGBTQ+ people as the problem: “We become a problem when we describe a problem” (Ahmed, 2017, p. 39). Framing LGBTQ+ people as the cause of their exclusion leads to solutions that aim to entice and retain LGBTQ+ people in STEM. However, this fails to address the issues that keep LGBTQ+ people from STEM fields. Mementorium aims to increase understanding of the interpersonal and systemic factors contributing to LGBTQ+ exclusion from STEM learning and professions, and to encourage more expansive thinking and action in solidarity with LGBTQ+ people. Mementorium tells the story of a queer, nonbinary person who is interested in learning about technology but faces barriers to participation due to normative and oppressive ideas about gender and sexuality. Each memory that the player uncovers has three branching points in the narrative. First, the player uncovers the memory, revealing the harm caused by marginalization. Next, the player chooses a reaction to the situation, reorienting them to the narrator’s experiences. Finally, the player chooses a future-oriented response to direct the narrator’s actions, offering choices for individual or group-oriented action, or action on a larger scale of social change. We are researching Mementorium to see how players make sense of LGBTQ+ marginalization as an individual and systemic issue, and how to reorient players toward counter-hegemonic actions that support marginalized people.
This work studies publications in the field of cognitive science and uses natural language processing (NLP) and graph-theoretic techniques to connect the analysis of the papers' content (abstracts) to their context (citations, journals). We apply hierarchical topic modeling to the abstracts and community detection algorithms to the citation network, and we measure content-context discrepancy to find academic fields that study similar topics but do not cite each other or publish in the same venues. These results suggest a promising, systematic framework for identifying opportunities for scientific collaboration in highly interdisciplinary fields such as cognitive science and machine learning.
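The content-context comparison described above can be sketched in a few lines. The toy data, the specific models (NMF for topics, greedy modularity for communities), and the discrepancy score below are illustrative stand-ins, not the authors' actual pipeline:

```python
# Illustrative sketch: compare topic similarity of paper abstracts (content)
# against community structure of a citation graph (context) to flag groups
# of papers that discuss similar topics yet rarely cite each other.
# All data and model choices here are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy abstracts: papers 0-2 and 3-5 share overlapping vocabulary.
abstracts = [
    "neural network learning representation",
    "deep neural network model learning",
    "representation learning model network",
    "memory attention cognition learning",
    "cognition attention memory model",
    "attention learning memory representation",
]

# Content: topic vectors for each abstract via TF-IDF + NMF.
tfidf = TfidfVectorizer().fit_transform(abstracts)
topics = NMF(n_components=2, init="nndsvd", random_state=0).fit_transform(tfidf)

# Context: a citation graph with two communities that never cite across.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])
communities = [sorted(c) for c in greedy_modularity_communities(G)]
a, b = communities[0], communities[1]

# Content-context discrepancy: mean topic similarity between papers in
# different communities, minus the fraction of edges crossing communities.
# A high value means "similar topics, but no citations between them."
topic_sim = cosine_similarity(topics[a], topics[b]).mean()
cross_edges = sum(1 for u, v in G.edges if (u in a) != (v in a))
discrepancy = topic_sim - cross_edges / G.number_of_edges()
print(f"discrepancy = {discrepancy:.2f}")
```

In a real study, the abstracts and citation graph would come from a bibliographic corpus, the topic model would be hierarchical, and the discrepancy measure would likely be defined per community pair rather than as a single scalar.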
Safinah Arshad Ali (MIT)
This submission is a poem reflecting on classification and belonging.
Milind Agarwal (Johns Hopkins University)
A substantial majority of the world’s languages have no language technologies or NLP toolkits at all. With an increasing reliance on technology and the web, depriving people of access to technology in their native language is indirectly causing a loss of language, culture, traditions, and linguistic information, and a diminishing richness of the human experience. This harsh reality marks the 21st century as a pivotal time for researchers and engineers in NLP. According to linguists, nearly half of the world's 7,000 languages will be extinct before the end of this very century. But what if advances in natural language processing and computational linguistics could help us change course? There has been a wide range of efforts by research groups on low-resource and resource-poor languages for the purposes of machine translation, and on endangered languages for the purposes of documentation and preservation. But despite numerous efforts in the field, there is a lack of a clear sense of direction and a unified front to tackle this problem. This paper hopes to unravel the diverse computational efforts being undertaken for low-resource, resource-poor, and endangered language research; the different data resource creation and extraction techniques; and the modern deep learning and statistical models being used specifically in this domain.
Code of Conduct
Information about the NeurIPS safety team will be added soon. In the meantime, if you need assistance before the conference with matters pertaining to the Code of Conduct or harassment, please contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.
Speakers and Panelists
Panel: Caste in Institutions and Tech
Vipin P. Veetil
Vipin is an economist with a PhD from George Mason University. He works in the area of monetary and macroeconomics. He currently works at the Indian Institute of Technology Madras.
Nikita Sonavane (she/her)
Nikita Sonavane has worked as a legal researcher and an advocate for over three years. She is the co-founder of the Criminal Justice and Police Accountability Project (CPAProject), a Bhopal-based litigation and research intervention focused on building accountability against the criminalisation of marginalised communities by the police and the criminal justice system. Her writing sits at the intersection of policing, caste, and the digitisation of criminal justice in India. Nikita has previously worked as a Research Associate with the Centre for Social Justice (CSJ), Ahmedabad, on issues of local governance, forest rights, and gender in the Adivasi region of Dang in Gujarat. She graduated with a B.A. (Political Science) degree from St. Xavier’s College, Mumbai and an LL.B. degree from Government Law College, Mumbai in 2016. Nikita holds an LL.M. in Law and Development from Azim Premji University (APU), Bangalore. Her writings have been published by the AI Now Institute at NYU, the Indian Express, the Hindu, and Caravan, among others.
Palashi V (she/they)
Palashi is a PhD candidate in Information Science at Cornell University. Her dissertation is an ethnography locating caste and its relationship with gender in the computing cultures of India and the Indian diaspora. Her project is a caste-critical analysis of upper-caste subjectivity in computing, specifically in women-in-technology initiatives, and how it shapes the experience of Dalit engineers. She is an engineer turned interdisciplinary scholar of the social and cultural worlds of computing, working at the intersection of Information Science, Anthropology, STS, and Feminist Studies.
Her work has been published in CSCW, CHI, thewire.in, and other venues, and has been supported by the Social Science Research Council, the Cornell Institute of Social Sciences, the Mario Einaudi Center for International Studies, the Mellon Foundation, and the University of Siegen. She is a student and early-career representative of the Feminist Scholarship Division of the International Communication Association. She has a Bachelor of Technology in Information Communication Technology from DA-IICT, India. She has previously worked at a Big Four technology consulting firm and in organizations focused on feminist technologies in India. She tweets intermittently at @lapshiii.
Akhil Kang (he/they)
Akhil Kang is a Ph.D. candidate in Socio-cultural Anthropology at Cornell University. His Ph.D. project focuses on the anthropology of the elite. He is academically and politically interested in shifting the anthropological gaze away from lower-caste individuals and in understanding victimhood and woundedness as articulated by upper-caste individuals/savarnas. He is an interdisciplinary scholar working at the intersection of several fields, including feminist and queer studies, affect and media studies, postcoloniality, and biopolitics. He is currently conducting his fieldwork in parts of North India, supported by the Wenner-Gren Dissertation Fieldwork Grant.
Prior to enrolling at Cornell, Akhil received his B.A. LLB (Hons.) from NALSAR University of Law, Hyderabad, and is a registered Advocate with the Bar Council of Delhi. Born and raised in Jalandhar (Punjab, India), he has been involved in queer and anti-caste activism and human rights lawyering. He has worked on several projects, including the role of men and masculinity in child marriages in India (with AJWS), feminist law archiving (with the MacArthur Foundation), and understanding gender and the sexual in institutional student movements and political formations in India (with the Ford Foundation). He writes about sex, desires, and politics at https://www.desi-underground-gay.com/
Resources for Caste
1) Annihilation of Caste (print version)
2) Unlearning Caste Supremacy Reading List by Equality Labs https://www.equalitylabs.org/castereadinglist
3) Dalit / Queer Literature
4) Birds of a Caste - How Caste Hierarchies Manifest in Retweet Behavior of Indian Politicians | Proceedings of the ACM on Human-Computer Interaction
Panel: Towards Animal-Centric AI
Sara Beery (she/her/hers)
Sara Beery has always been passionate about the natural world, and she saw a need for technology-based approaches to conservation and sustainability challenges. This led her to pursue a PhD at Caltech, where her research focuses on computer vision for global-scale biodiversity monitoring. She works closely with Microsoft AI for Earth and Google Research to translate her work into usable tools. Sara’s experiences as a professional ballerina, a queer woman, and a nontraditional student have taught her the value of unique and diverse perspectives in the research community. She’s passionate about increasing diversity and inclusion in STEM through mentorship and outreach.
Luisa Ruge (she/her)
Eight years ago, Luisa decided to pursue her goal of increasing animals' wellbeing through design by broadening the scope of the users she works with to include animals. Along the way, she became a certified mobility assistance dog trainer, worked at a dog day care, helped design a training facility for dogs who assist veterans, and completed a PhD in Animal-Computer Interaction from the Open University. Most recently, she works as an independent animal-centered design consultant and is the co-founder of Scout9, a company which aims to empower pet partners by helping them make better and more informed decisions on their dog's behalf.
Lydia X. Z. Brown (they/them/theirs/themself or no pronouns)