About

Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences and in their work environments, the main reasons being a lack of queer community and role models. Over the past years, Queer in AI has worked to address these problems, yet we have observed that the voices of marginalized queer communities - especially transgender and non-binary folks and queer BIPOC folks - have been neglected. The purpose of this workshop is to highlight the issues these communities face by featuring talks and panel discussions on the inclusion of neurodiverse people in our communities, the intersection of queer rights and animal rights, and worker rights issues around the world.

The main topics of the workshop will revolve around:

  • caste in institutions and tech

Despite various protests and social movements, caste-based discrimination continues to affect the lives of Dalits, Bahujans and Adivasis in India and the Indian diaspora. To bring these experiences to light, we will provide the NeurIPS community with a brief history of caste, followed by discussions on how caste is embedded in institutions, how caste and technology co-construct each other and the intersection of these experiences with queerness. Finally, we will outline implications for making workspaces and technologies more inclusive for marginalized caste groups.

  • animal-centric AI

Many AI justice movements fight dehumanization, which is absolutely necessary, but this framing also reinforces the human-animal binary and grounds human dignity in not being treated like non-human animals. To discuss the harms and benefits AI systems bring to non-human animals, we are bringing together folks in AI for conservation, animal-centered design, AI for farming, and Indigenous AI. We’ll be discussing how AI can benefit conservation and farming, how to better center animals in AI design, and ways to mitigate risks that AI systems can pose to animals.

  • the intersections of AI, queer identity and neurodiversity

Neurodivergence remains a taboo topic, often clouded by stereotypes and prejudice rooted in past and current stigmatization. We'll explore the intersection of LGBTQIA+ identity and neurodivergence, and what more can be done to address the topic, centering voices who identify as neurodivergent.

  • queer identity, labor rights, and organization

The tech sphere has long been a place of little unionization and a strong focus on workers as individuals. But more and more companies, such as Google and Amazon, face pressure from budding labor organizing among their employees. In many of these movements, marginalized people are at the forefront. In this panel, we explore three interlinked questions: What is the role of queer and marginalized people in labor movements in tech? What role should labor movements play in the era of big tech companies? And what impact does AI technology have on workers, especially marginalized workers?

  • a critical approach to algorithmic fairness

We'll discuss whether justice can be achieved with current approaches to operationalizing fairness, and how social-theoretical principles for conceptualizing multiple axes of oppression can be applied in pursuit of algorithmic fairness.


Additionally, at Queer in AI’s socials at NeurIPS 2021, we will focus on creating a safe and inclusive casual networking and socializing space for LGBTQIA+ individuals involved with AI. Together, these components will create a community space where attendees can learn and grow from connecting with each other, bonding over shared experiences, and learning from each individual’s unique insights into AI, queerness, and beyond!

Accessibility

All panels will have English live captioning.

Schedule


Our workshop will be held from Tuesday, December 07 to Thursday, December 09.


In order to attend the panels and the socials, one must be registered for NeurIPS 2021.


Note: all times are in Eastern Standard Time (EST).


Tuesday, December 07

00:00 - 02:00 Joint Affinity Group Poster Session

08:00 - 09:00 Panel: Caste in Institutions and Tech

09:00 - 10:00 Panel: Towards Animal-Centric AI

14:00 - 16:00 Social #1


Wednesday, December 08

13:30 - 14:30 Queer in AI: Year in Review

14:30 - 15:30 Panel: Neurodiversity

15:30 - 16:30 Panel: Labor Rights


Thursday, December 09

08:00 - 10:00 Social #2: A Critical Approach to Algorithmic Fairness


Joint Affinity Group Poster Session


The Joint Affinity Group Poster Session will be held on Tuesday, December 07, from 00:00 to 02:00 EST.

The posters are listed on the NeurIPS virtual site, with direct links to the gather.town space for each affinity group.




Queer in AI posters

William Agnew (University of Washington); Arjun Subramonian (UCLA); Juan Pajaro Velasquez (Youth Observatory ISOC); Ashwin S (QueerInAI)

AI, machine learning, and data science methods are already pervasive in our society and technology, affecting all of our lives in many subtle ways. Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a direct result of this exclusion, these technologies have long histories of being less useful or even harmful to minoritized groups. This infuriating history illustrates why industry cannot be trusted to self-regulate and why trust in commercial AI systems and development has been lost. We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate both feminist, non-exploitative participatory design principles and strong, outside, and continual monitoring and testing. We additionally explain the importance of considering aspects of trustworthiness beyond just transparency, fairness, and accountability - specifically, of considering justice and shifting power to the people and the disempowered as core values of any trustworthy AI system. Creating trustworthy AI starts by funding, supporting, and empowering groups like Queer in AI so the field of AI has the diversity and inclusion to credibly and effectively develop trustworthy AI. Through our years of work and advocacy, we have developed expert knowledge around questions of whether and how gender, sexuality, and other aspects of identity should be used in AI systems and how harms along these lines should be mitigated. Based on this, we discuss a gendered approach to AI, and further propose a queer epistemology and analyze the benefits it can bring to AI.


Dylan Paré (University of Calgary); Scout Windsor (University of Calgary); John Craig (Queer Code Collective)

We present Mementorium, an interactive, branching narrative told in immersive virtual reality (VR). The player uncovers the narrator’s memories of gender- and sexuality-based marginalizations in STEM learning environments, moving from childhood to early adulthood. Mementorium’s design builds upon our previous designs and research on queer reorientations to computing and queer approaches to embodied learning in VR. When LGBTQ+ people’s exclusion is even acknowledged, approaches to addressing the problem often treat LGBTQ+ people as the problem: “We become a problem when we describe a problem” (Ahmed, 2017, p. 39). Framing LGBTQ+ people as the cause of their exclusion leads to solutions that merely entice and retain LGBTQ+ people in STEM; this fails to address the issues that keep LGBTQ+ people from STEM fields. Mementorium aims to increase understanding of the interpersonal and systemic factors contributing to LGBTQ+ exclusion from STEM learning and professions, and to encourage more expansive thinking and action in solidarity with LGBTQ+ people. Mementorium tells the story of a queer, nonbinary person who is interested in learning about technology but faces barriers to participation due to normative and oppressive ideas about gender and sexuality. Each of the memories that the player uncovers has three branching points in the narrative. First, the player uncovers the memory, revealing the harm caused by marginalization. Next, the player chooses a reaction to the situation, reorienting them to the narrator’s experiences. Finally, the player chooses a future-oriented response to direct the narrator’s actions, offering choices for individual or group-oriented action or action on a larger scale of social change. We are researching Mementorium to see how players make sense of LGBTQ+ marginalizations as individual and systemic issues and how to reorient players toward counter-hegemonic actions that support marginalized people.


Anonymous author

This work studies publications in the field of cognitive science, utilizing natural language processing (NLP) and graph-theoretical techniques to connect an analysis of the papers' content (abstracts) to their context (citations, journals). We apply hierarchical topic modeling to the abstracts and community detection algorithms to the citation network, and measure content-context discrepancy to find academic fields that study similar topics but do not cite each other or publish in the same venues. These results show a promising, systematic framework for identifying opportunities for scientific collaboration in highly interdisciplinary fields such as cognitive science and machine learning.
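To make this kind of content-vs-context pipeline concrete, here is a minimal sketch. It is illustrative only: the toy abstracts, citation edges, model sizes, and the cosine-similarity discrepancy heuristic below are hypothetical stand-ins (and flat LDA stands in for hierarchical topic modeling), not the author's actual data or method.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from networkx.algorithms.community import greedy_modularity_communities

# Toy inputs: paper abstracts (content) and citation edges (context).
abstracts = [
    "hierarchical topic models of scientific abstracts",
    "community detection algorithms for citation networks",
    "graph theory and language processing for the science of science",
    "topic models and citation graphs in cognitive science",
]
citations = [(0, 1), (2, 3)]  # hypothetical (citing, cited) paper indices

# Content side: a topic mixture per abstract via (flat) LDA.
X = CountVectorizer(stop_words="english").fit_transform(abstracts)
topic_mix = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)

# Context side: citation-graph communities via greedy modularity.
G = nx.Graph()
G.add_nodes_from(range(len(abstracts)))
G.add_edges_from(citations)
community_of = {
    node: idx
    for idx, nodes in enumerate(greedy_modularity_communities(G))
    for node in nodes
}

# One crude discrepancy signal: pairs of papers whose topic mixtures are
# similar (high cosine similarity) but which sit in different communities,
# i.e., similar content that does not share a citation context.
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        sim = np.dot(topic_mix[i], topic_mix[j]) / (
            np.linalg.norm(topic_mix[i]) * np.linalg.norm(topic_mix[j])
        )
        if sim > 0.9 and community_of[i] != community_of[j]:
            print(f"papers {i} and {j}: similar content, separate citation communities")
```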


Safinah Arshad Ali (MIT)

This submission is a poem reflecting on classification and belongingness.


Milind Agarwal (Johns Hopkins University)

A substantial majority of the world’s languages have no language technologies or NLP toolkits at all. With an increasing reliance on technology and the web, depriving people of access to technology in their native language is indirectly causing a loss of language, culture, traditions, and linguistic information, and a diminishing richness of the human experience. This harsh reality marks the 21st century as a pivotal time for researchers and engineers in NLP. According to linguists, nearly half of the world's 7,000 languages will be extinct before the end of this very century. But what if the advances in natural language processing and computational linguistics could help us change course? There has been a wide range of efforts by research groups on low-resource and resource-poor languages for the purposes of machine translation, and on endangered languages for the purposes of documentation and preservation. But despite numerous efforts in the field, there is no clear sense of direction or unified front to tackle this problem. This paper hopes to unravel the diverse computational efforts being undertaken for low-resource, resource-poor, and endangered language research; the different data resource creation and extraction techniques; and the modern deep learning and statistical models being used specifically in this domain.

Code of Conduct

Please read the Queer in AI code of conduct, which will be strictly enforced at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.

Information about the NeurIPS safety team will be added soon. If you need assistance before the conference with matters pertaining to the Code of Conduct or harassment, please contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Speakers and Panelists

Panel: Caste in Institutions and Tech

Vipin P. Veetil

Vipin is an economist with a PhD from George Mason University. He works in the area of monetary and macroeconomics. He currently works at the Indian Institute of Technology Madras.

Nikita Sonavane (she/her)

Nikita Sonavane has worked as a legal researcher and an advocate for over three years. She is the co-founder of the Criminal Justice and Police Accountability Project (CPAProject), a Bhopal-based litigation and research intervention focused on building accountability against the criminalisation of marginalised communities by the police and the criminal justice system. Her writings sit at the intersection of policing, caste, and the digitisation of criminal justice in India. Nikita has previously worked as a Research Associate with the Centre for Social Justice (CSJ), Ahmedabad, on issues of local governance, forest rights, and gender in the Adivasi region of Dang in Gujarat. She graduated with a B.A. (Political Science) degree from St. Xavier’s College, Mumbai, and an LL.B. degree from Government Law College, Mumbai, in 2016. Nikita holds an LL.M. in Law and Development from Azim Premji University (APU), Bangalore. Her writings have been published by the AI Now Institute at NYU, the Indian Express, The Hindu, and The Caravan, among others.

Palashi V (she/they)

Palashi is a PhD candidate in Information Science at Cornell University. Her dissertation is an ethnography locating caste and its relationship with gender in the computing cultures of India and the Indian diaspora. Her project is a caste-critical analysis of upper-caste subjectivity in computing, specifically in women-in-technology initiatives, and of how it shapes the experiences of Dalit engineers. She is an engineer turned interdisciplinary scholar of the social and cultural worlds of computing, working at the intersection of Information Science, Anthropology, STS, and Feminist Studies.


Her work has been published in CSCW, CHI, thewire.in, and other venues, and has been supported by the Social Science Research Council, the Cornell Institute of Social Sciences, the Mario Einaudi Center for International Studies, the Mellon Foundation, and the University of Siegen. She is a student and early-career representative of the Feminist Scholarship Division of the International Communication Association. She has a Bachelor of Technology in Information and Communication Technology from DA-IICT, India. She has previously worked at a Big Four technology consulting firm and in organizations focused on feminist technologies in India. She tweets intermittently at @lapshiii.

Akhil Kang (he/they)

Akhil Kang is a Ph.D. candidate in Socio-cultural Anthropology at Cornell University. His Ph.D. project focuses on the anthropology of the elite. He is academically and politically interested in shifting the anthropological gaze away from lower-caste individuals and in understanding victimhood and woundedness as articulated by upper-caste individuals/savarnas. He is an interdisciplinary scholar working at the intersection of several fields, including feminist and queer studies, affect and media studies, postcoloniality, and biopolitics. He is currently conducting his fieldwork in parts of North India, supported by the Wenner-Gren Dissertation Fieldwork Grant.

Prior to enrolling at Cornell, Akhil received his B.A. LL.B. (Hons.) from NALSAR University of Law, Hyderabad, and is a registered Advocate with the Bar Council of Delhi. Born and raised in Jalandhar (Punjab, India), he has been involved in queer and anti-caste activism and human rights lawyering. He has worked on several projects, including the role of men and masculinity in child marriages in India (with AJWS), feminist law archiving (with the MacArthur Foundation), and understanding gender and the sexual in institutional student movements and political formations in India (with the Ford Foundation). He writes about sex, desires, and politics at https://www.desi-underground-gay.com/

Resources for Caste

1) The Annihilation of Caste (print version): https://ccnmtl.columbia.edu/projects/mmt/ambedkar/web/readings/aoc_print_2004.pdf

2) Unlearning Caste Supremacy Reading List by Equality Labs https://www.equalitylabs.org/castereadinglist

3) Dalit / Queer Literature

4) Birds of a Caste: How Caste Hierarchies Manifest in Retweet Behavior of Indian Politicians (Proceedings of the ACM on Human-Computer Interaction)

Panel: Towards Animal-Centric AI

Sara Beery (she/her/hers)

Sara Beery has always been passionate about the natural world, and she saw a need for technology-based approaches to conservation and sustainability challenges. This led her to pursue a PhD at Caltech, where her research focuses on computer vision for global-scale biodiversity monitoring. She works closely with Microsoft AI for Earth and Google Research to translate her work into usable tools. Sara’s experiences as a professional ballerina, a queer woman, and a nontraditional student have taught her the value of unique and diverse perspectives in the research community. She’s passionate about increasing diversity and inclusion in STEM through mentorship and outreach.

Luisa Ruge (she/her)

Animal & Human Centered Designer / PhD in Animal Computer Interaction


Luisa Ruge is a Colombian-American user-centered designer with over 15 years of experience helping companies of all sizes across the Americas and Europe develop delightful product experiences. Her previous human-centered work includes working at a Bogotá-based company builder, where she co-led multidisciplinary teams in the design of start-ups for the Latin American market; collaborating with the US State Department and the James Beard Foundation on the exhibit strategy for the US Pavilion at Expo Milan 2015; establishing the design team at an industry-leading, US-based consumer goods company; serving as an adjunct design faculty member at the Illinois Institute of Technology; and leading color, material, and finish strategy and product development projects at a major global appliance company in Europe, Australia, and the US.


Eight years ago, Luisa decided to pursue her goal of increasing animals' wellbeing through design by broadening the scope of the users she works with to include animals. Along the way, she has become a certified mobility-assistance dog trainer, worked at a dog day care, helped design a training facility for dogs who help veterans, and completed a PhD in Animal Computer Interaction from the Open University. Most recently, she is an independent animal-centered design consultant and the co-founder of Scout9, a company which aims to empower pet partners by helping them make better and more informed decisions on their dog's behalf.

Brian Aldridge

Clinical and Health Innovation Professor, College of Veterinary Medicine and Carle Illinois College of Medicine, UIUC, IL, USA.


Dr Brian Aldridge currently serves as a Clinical and Health Innovation Professor at the University of Illinois, with a joint appointment between the College of Veterinary Medicine and Carle Illinois College of Medicine. He holds affiliate positions at the Institute for Genomic Biology, the National Center for Supercomputing Applications, and the Center for Digital Agriculture. His scholarly efforts focus on animal health defense in young and growing animals, with a particular interest in mucosal immunology, respiratory and gastrointestinal health, and the early detection of health failure. As a member of the University of Illinois AIFARMS team, Brian provides expertise and leadership in the detection and interpretation of host responses to health challenges. Through his clinical, research, and educational outreach activities, Brian maintains long-term, productive relationships with partners throughout the veterinary and human health industries and focuses his discovery efforts on projects that can help translate clinical data into intelligence that informs effective management decision-making.

Brian has used his passion for teaching and learning to assist in the development and implementation of high-impact educational programs around the world. He has helped develop new veterinary and medical school curricula at institutions in California, Illinois, the United Kingdom, and Southeast Asia, and has received numerous teaching awards. He was the co-developer of a Massive Open Online Course entitled Sustainable Food Production Through Livestock Health Management, which had over 20,000 participants from 100 countries, over 35% of whom were from developing nations. He has also helped develop a series of online Continuing Education and Graduate learning programs at the iLearning Center at UIUC.

Michelle Lee Brown (she/her or they/them)

Michelle Lee Brown is the Assistant Professor of Indigenous Knowledge, Data Sovereignty, and Decolonization at Washington State University, in their Digital Technology and Culture Program. She was recently an Eastman Fellow at Dartmouth College in the Department of Native American and Indigenous Studies, and completed her PhD in the Indigenous Politics and Futures Studies programs in the Political Science Department at the University of Hawaiʻi at Mānoa. Her work articulates Indigenous political praxis and futures through digital SF; she is currently working on a VR project on water, eels, and relationality, and a comic based on multiple levels of impostor syndrome. More about her practice and praxis can be found at www.michelleleebrown.com.

Euskalduna from Lapurdi (Biarritz/Miarritze Côte des Basques), she grew up on Wampanoag territories around Buzzards Bay and now lives on Umatilla, Cayuse, and Walla Walla lands and waters. She strives to uphold her relational commitments to these communities and is grateful to be working with her fellow panelists to imagine and build otherwise.

Julia Ling (she/her)

Dr. Julia Ling is a tech lead on the Tidal project at X, the Moonshot Factory. She leads the software and machine learning team at Tidal, a project whose mission is to protect the ocean while feeding humanity sustainably. Prior to joining Tidal, Julia was the CTO at Citrine Informatics, a 70+ person start-up in the Bay Area building an AI platform to accelerate new materials development. She is a recognized leader in applying machine learning to scientific problems. She holds a PhD in Mechanical Engineering from Stanford University and a Bachelor's in Physics from Princeton University.

Panel: Neurodiversity

Naba Rizvi


Naba Rizvi is a 2nd year PhD student at UC San Diego. Her research focuses on using social signal processing and participatory design to identify and mitigate implicit bias in healthcare.

Lydia X. Z. Brown (they/them/theirs/themself or no pronouns)

[Photo: Lydia smiles and tilts their head slightly to the side, looking confidently at the camera. They are a young-ish East Asian person with a streak of teal in their short black hair, wearing glasses, a cobalt blue jacket and navy tie, with a blue copper wall behind them. Photo by Sarah Tundermann.]


Lydia X. Z. Brown is a Policy Counsel with CDT’s Privacy and Data Project, focused on disability rights and algorithmic fairness and justice. Their work has investigated algorithmic harm and injustice in public benefits determinations, hiring algorithms, and algorithmic surveillance that disproportionately impact disabled people, particularly multiply-marginalized disabled people. Outside of their work at CDT, Lydia is an adjunct lecturer and core faculty in disability studies at Georgetown University, and the founding director of the Fund for Community Reparations for Autistic People of Color’s Interdependence, Survival, and Empowerment. They serve on the American Bar Association’s Commission on Disability Rights, co-chair the ABA Section on Civil Rights and Social Justice’s Disability Rights Committee, serve as co-president of the Disability Rights Bar Association, and represent the Disability Justice Committee on the National Lawyers Guild’s board. Lydia is a founding board member of the Alliance for Citizen-Directed Supports, and serves on several advisory committees, including for the Law and Politics of Digital Mental Health Technology project at the University of Melbourne, the Lurie Institute for Disability Policy at Brandeis University, and the Coelho Center for Disability Law, Policy, and Innovation at Loyola Law School. Before joining CDT, Lydia worked on disability rights and algorithmic fairness at Georgetown Law’s Institute for Tech Law and Policy. Lydia has spoken internationally and throughout the U.S. on a range of topics related to disability rights and disability justice, especially at the intersections of race, class, gender, and sexuality, and has published in numerous scholarly and community publications. In 2015, Pacific Standard named Lydia to its list of Top 30 Thinkers in the Social Sciences Under 30, and Mic named Lydia to its inaugural list of 50 impactful leaders, cultural influencers, and breakthrough innovators for the next generation. Most recently, Gold House Foundation named Lydia to its A100 list of America’s most impactful Asians for 2020.

Panel: Labor Rights

Raksha Muthukumar (she/her)

Raksha Muthukumar is a founding organizer of the Alphabet Workers Union, the union for all workers under Google/Alphabet. She formerly worked as a software engineer at Google but recently left to pursue her activism full time. She is currently working at the YA-YA Network, an organization dedicated to empowering marginalized youth with the skills they need to become the activists of the future. Raksha is a deep believer in the power of storytelling, and she is passionate about sharing stories at the intersections of tech and labor and queerness - her published writings and podcast can be found on her website, www.raksha.gay. Raksha intends to be involved with the tech labor struggle as well as other leftist & abolitionist organizing for the foreseeable future.

Andrea Haverkamp (she/they)

Dr. Andrea Haverkamp (she/they) is a queer labor organizer, activist, and feral engineering academic. She holds a Ph.D. in Environmental Engineering with a doctoral minor in Queer Studies from Oregon State University. Her research explores the experiences of transgender and gender-nonconforming students in engineering and computer science, such as their sources of community support and collective resiliency, as well as the connections between anti-trans discourse and white nationalist radicalization pathways in STEM and nerddom cultures online. Dr. Haverkamp serves on the editorial board of the International Journal of Engineering, Social Justice, and Peace, an open-access scholarly publication exploring the intersections of engineering and inequity. Before her Ph.D., Dr. Haverkamp worked as an engineer in the federal government and served as a science education volunteer with the Peace Corps in Liberia. She is currently a labor organizer in the healthcare sector, based in Seattle.

Levin Kim (they/them)

Levin (they/them) is currently based in Seattle, Washington. Broadly, Levin's work examines the intersection of technology, power, and bodies through different collaborative projects in academic research, creative work, and community organizing. They are pursuing a PhD in Information Science and organizing with UAW 4121, the union of Academic Student Employees at the University of Washington. In the past, Levin has worked at the Berkman Klein Center for Internet & Society on the Ethics and Governance of AI Initiative, and graduated from the University of Michigan with a B.A. in Women's and Gender Studies and a B.A. in Drama, with minors in Computer Science and Art & Design. More of their work can be found at: www.levinishere.com, and on Twitter at @levinishere.

Social: Critical Approach to Algorithmic Fairness

Anaelia (Elia) Ovalle (they/she)

Elia is a PhD student in Computer Science at UCLA, working at the intersection of representation learning and algorithmic fairness with professors Majid Sarrafzadeh and Kai-Wei Chang. They are fascinated by the ways machines capture and represent information across various modalities, and by how representations impact an algorithm's fairness and robustness downstream. With a particular interest in serving underrepresented communities, Elia enjoys applying their research to both the LGBTQIA+ population and the minority health space. Prior to UCLA, they worked as a data scientist at Unity Technologies. Since then, Elia has also interned at Amazon (Prime Video) and Facebook (Responsible AI), seeking to measure and mitigate disparate outcomes within machine representations.

Elle Lett, PhD (Dr./she/her)

Elle Lett is a Black transgender woman, statistician-epidemiologist, and physician-in-training. Through her work, she applies the theory and principles of Black feminism to understanding the health impacts of systemic racism, transphobia, and other forms of discrimination on oppressed groups in the United States. She holds a PhD in Epidemiology and master’s degrees in Statistics and Biostatistics. To date, her work has focused on intersectional approaches to transgender health and the health impacts of systemic racism, as demonstrated by state-sanctioned violence. Now, she is turning her focus to algorithmic fairness in clinical prediction models and to mitigating systems of inequity in health services provision.

Organizers

Claas Voelcker (he/him)

Claas is a PhD student at the University of Toronto, PAIR lab. He is interested in modeling complex data for reinforcement learning and control applications, as well as probabilistic sequence models. At Queer in AI, he administers the mailing list and organizes conference workshops and socials at ML and robotics venues.


Arjun Subramonian (they/them)

Arjun is a brown queer, agender PhD student at the University of California, Los Angeles. Their research focuses on trustworthy graph machine learning and NLP. They are a core organizer of Queer in AI, D&I chair for NAACL 2022, co-founded QWER Hacks, and taught machine learning and AI ethics at Title I schools in LA. They also love to run, hike, observe and document wildlife, and play the ukulele!


Ashwin (they/them)

Ashwin is a Research Associate at Precog and the Language Technologies Research Center, IIIT Hyderabad. They use both qualitative and computational methods to understand phenomena on social media platforms and computer-mediated communication technologies, with the goal of making these systems more inclusive and less harmful for the margins of society.



Sharvani Jha (she/her)

Sharvani is a computer scientist who likes waffles, whale sharks, waves (EMIC and ocean), and alliteration (UCLA CS 21). She is a co-founder of QWER Hacks and worked on ELFIN UCLA Cubesat + SWE UCLA + ACM UCLA. She is working on learning more about AI ethics and applications of ML to space weather.


Umut Pajaro Velasquez (they/them - Spanish: elle/le)

Umut is a Black Latinx Caribbean queer person who holds a Bachelor's in Communications Studies from the University of Cartagena (Colombia) and an MA in Cultural Studies from the National University of Rosario (Argentina). Their main research focus has been LGBTQI issues and queer representation in media. In the last couple of years, they have focused on gender-diverse representation online and on topics related to Artificial Intelligence, ethics, and social computing as an independent researcher. They are also a student in the MSt in AI Ethics and Society at the University of Cambridge (UK).

Important Notes:

  • NeurIPS is still determining how and to whom to offer free registration for affinity groups. We will share the information once available.

Call for Contributions (CLOSED)

The submissions must be generally related to the intersection of LGBTQIA+ representation and AI, or be research produced by LGBTQIA+ individuals. The submissions need not be directly related to the themes of the workshop, and they can be works in progress. Please refrain from including personally identifying information in your submission. No submissions will be desk-rejected.

We will open the call on Monday, September 20, 2021, and close it on Monday, November 15, 2021, Anywhere On Earth (AoE), with acceptance notifications going out on a rolling basis. Additionally, we are accepting submissions in any medium, including (but not limited to) research papers, books, poetry, music, art, musings, TikToks, and testimonials. Submissions need NOT be in English. This is to maximize the inclusivity of our call for submissions and amplify non-traditional expressions of what it means to be Queer in AI. You can find excellent examples of “non-traditional” submissions here.

Furthermore, we encourage all undergraduates to submit their work. We will also publicize outstanding undergraduate research.

We will work to grant all individuals with accepted work free conference admission. All authors with accepted work will have FULL control over how their name appears in public listings of accepted submissions. Tentatively, the submissions will be presented in a joint affinity group poster session, where you will have the option to present and answer questions in a gather.town setting.

If you need help with your submission in the form of mentoring or advice, you can get in touch with us at queerinaineurips2021@gmail.com.

Submission link: https://cmt3.research.microsoft.com/qinaineurips2021/