Queer in AI Workshop @ ICML 2021

All recordings can be found here.

About

Queer in AI’s demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences and in their work environments, with the main reasons being a lack of queer community and role models. Over the past few years, Queer in AI has worked to address these issues, yet we have observed that the voices of marginalized queer communities, especially transgender and non-binary folks and queer BIPOC folks, have been neglected. The purpose of this workshop is to highlight the issues these communities face by featuring talks and panel discussions on the inclusion of non-Western non-binary identities and of Black, Indigenous, and Pacific Islander non-cis folks.

We will explore some of the following topics, with an overarching theme of trans and non-binary identities:

  • Creating safer spaces for trans and non-binary folks in AI: A broad discussion of the intersection of trans and non-binary identities with human-centered AI systems, and raising awareness of the issues that trans and non-binary folks face in academia and industry.

  • Trans-inclusive publishing landscape: Discussion of best practices for and obstacles to name change policies in publications, and of mitigating deadnaming in citations. (Reference)

  • Black, Indigenous, Latinx and Pacific Islander trans and non-binary people: Sharing experiences and raising awareness regarding racism and transphobia in the queer community.

  • Gender across cultures: Learning about various cultures and understanding issues of marginalized non-cis identities, which are generally misrepresented.


Additionally, at Queer in AI’s socials at ICML 2021, we will focus on creating a safe and inclusive casual networking and socializing space for LGBTQIA+ individuals involved with AI. We will further offer opportunities for attendees to share their backgrounds and experiences through storytelling. Together, these components will create a community space where attendees can learn and grow from connecting with each other, bonding over shared experiences, and learning from each individual’s unique insights into AI, queerness, and beyond!

Contact Us

Email: queerinaiicml2021@gmail.com

Code of Conduct

Please read the Queer in AI code of conduct, which will be strictly followed at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.

ICML 2021 adheres to the ICML code of conduct, and Queer in AI adheres to the Queer in AI anti-harassment policy. Any participant who experiences harassment or hostile behavior may contact the HR Liaison via the ICML Hotline at either ICMLhotline@gmail.com or 858-208-3810, or contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Structure

The workshop and social dates are below. We will have our joint poster session with other affinity groups on Monday, July 19. On Tuesday, July 20, we will run a workshop over Zoom with the following tentative schedule (all indicated times are in Pacific Daylight Time, PDT). If you have suggestions to improve the schedule, please reach out to us at queerinaiicml2021@gmail.com.

The sign-up form for the socials can be found here.

Monday, July 19

06:00 - 08:00 Social: AI for Biodiversity (Sara Beery)

18:00 - 20:00 Joint Poster Session (see Accepted Works below)

Tuesday, July 20

08:00 - 08:40 Panel Discussion: Gender Across Cultures (Shubha Chacko, Umut Pajaro)

08:45 - 09:00 Small-Group Discussion

---

13:00 - 13:10 Introduction (Land Acknowledgement, Overview of Demographic Survey, Queer in AI Initiatives, Code of Conduct)

13:10 - 14:00 Talk: Advocating for Trans Inclusive Name Change Policies and Practices in Academic Publishing (Dr. Tess Tanenbaum)

14:10 - 14:30 Talk: Non-Binary Representation in AI (Lelia Marie Hampton)

14:35 - 14:55 Talk: Queer in AI Inclusive Conference Guide & Code of Conduct Reminder (MaryLena Bleile, Arjun Subramonian)

15:10 - 15:55 Intersectionality Gathering/Community Storytelling Session (Elia Ovalle)

16:00 - 16:40 Panel Discussion: Creating safer spaces for trans and non-binary folks (Belén Giménez, Fernanda Carles, Kendra Albert)

16:45 - 17:00 Small-Group Discussion

Friday, July 23

18:00 - 20:00 Social: Storytelling: Intersectional Queer Experiences Around the World (Shubha Chacko)

Queer in AI Buddy Program

One of the major struggles of being LGBTQ+ in AI and surrounding fields is the sense of isolation; feeling like you’re different and alone contributes to minority stress (Frost et al., 2015; Meyer, 2003a, 2003b; Meyer & Dean, 1998). Furthermore, in Queer in AI’s demographic survey, a common issue raised by participants was a lack of community support and role models (Queer in AI, 2019). Dealing with this sort of isolation and the potential for harassment, while navigating outness and figuring out whom to trust, can make academic conferences even more stressful and intimidating for new participants than they already are.


To remedy this, we introduce the buddy system: pairing experienced LGBTQ+ participants with newer people can help alleviate some of these struggles. Shared LGBTQ+ identity or known allyship removes, to an extent, the burden of navigating outness alone. Furthermore, experienced people can show newer individuals the ropes while providing a social “safety net”, so that the newer person isn’t left to figure things out alone. As a side effect, the buddy system also provides an excellent networking opportunity.


We will match people based on shared time zone and language. We also acknowledge the importance of intersectionality and will aim to match people along other axes as well, e.g., matching trans people together. However, this is not a guarantee, as its feasibility depends on how many people apply and on how varied their characteristics are. A toy sketch of this bucket-then-pair idea is below.
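The following sketch illustrates the bucket-then-pair matching described above. The records, field names, and greedy pairing are invented for illustration; the organizers’ actual matching process may differ.

```python
from collections import defaultdict

# Hypothetical sign-up records: (name, time_zone, language, is_experienced).
signups = [
    ("Ash", "UTC+1", "en", True), ("Bo", "UTC+1", "en", False),
    ("Cam", "UTC+5:30", "hi", True), ("Dev", "UTC+5:30", "hi", False),
]

# Bucket by shared (time zone, language), then pair experienced with new.
buckets = defaultdict(lambda: {"experienced": [], "new": []})
for name, tz, lang, experienced in signups:
    buckets[(tz, lang)]["experienced" if experienced else "new"].append(name)

pairs = []
for group in buckets.values():
    pairs += list(zip(group["experienced"], group["new"]))

print(pairs)  # [('Ash', 'Bo'), ('Cam', 'Dev')]
```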


Guidelines & Things to Consider

  • You’ll be introduced to your buddy shortly before the conference. If you’re a senior person, please consider making the first move to connect!

  • You’ll be invited to the Queer in AI Slack, where you can communicate with each other (as an alternative to email), if you would like to do so.

  • We recommend setting up a meeting with your buddy before the conference starts so that you can introduce yourselves and make a plan.

  • Staying connected after the conference is possible but not guaranteed, since many individuals experience Zoom fatigue.

The sign-up form for the buddy program can be found here. We especially encourage you to sign up if you are an undergraduate or new to ML conferences or queer spaces. You do not need to be registered for the conference to participate in the buddy program.

Speakers and Panelists

Anaelia (Elia) Ovalle (they/he/she)

Elia is a 3rd year CS PhD student @ UCLA studying algorithmic bias and representation learning. Through this, they seek to empower historically marginalized groups including but not limited to ethnic minorities and LGBTQIA+ folks.

Arya Jeipea Karijo (she/her)

Arya Jeipea Karijo is a trans woman in Kenya working at the intersection of human rights, LGBTIQ rights, feminism, and gender equality. She is a user experience researcher and designer building for the simplicity of human lives: applications, experiences, and interventions for people’s resilience. Arya does communications for UHAI EASHRI, an indigenous LGBTIQ funder in East Africa. She has in the past worked for openDemocracy as a feminist investigative journalist, and also works for Whose Knowledge?, a global campaign to center the knowledge of marginalized communities (the minoritized majority) on the internet.

Belén Giménez (she/her)

Belén Giménez is from Asunción, Paraguay. She has a B.A. in Psychology from Lewis & Clark College in Portland, OR, USA, and is currently pursuing a Master’s in Human-Computer Interaction (HCI) at the University of Siegen in Germany. Her main interest is how interactions with and through technology affect individual and collective human behavior, which she explores through the analysis and development of socio-technical systems and through research related to feminist and queer HCI.

Fernanda Carles (she/her)

Fernanda is a mechatronics engineering student at the Faculty of Engineering of the National University of Asunción (FIUNA). She has worked as a coordinator and educator on projects involving education with technology, maker culture, and digital fabrication. A feminist and an activist for closing the gender digital gap, she is a member of Girls Code and of the Asunción chapter of Django Girls. She is currently working on her thesis on data science applied to education.

Kendra Albert (they/them/their)

Kendra Albert is a clinical instructor at the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society at Harvard University, where they teach students to practice technology law by working with pro bono clients. Kendra also publishes on gender, adversarial machine learning, and power, in various combinations. They hold a law degree from Harvard Law School, serve on the board of the ACLU of Massachusetts and the Tor Project, and are also a legal advisor for Hacking // Hustling. Kendra enjoys playing video games, coming up with ways to redistribute institutional wealth, and watching people in power squirm.

Lelia Marie Hampton (they/them)

Lelia Marie Hampton is a Ph.D. student in computer science.

Sara Beery (she/her/hers)

Sara Beery has always been passionate about the natural world, and she saw a need for technology-based approaches to conservation and sustainability challenges. This led her to pursue a PhD at Caltech, where her research focuses on computer vision for global-scale biodiversity monitoring. She works closely with Microsoft AI for Earth and Google Research to translate her work into usable tools. Sara’s experiences as a professional ballerina, a queer woman, and a nontraditional student have taught her the value of unique and diverse perspectives in the research community. She’s passionate about increasing diversity and inclusion in STEM through mentorship and outreach.

Shubha Chacko (she/her)

Shubha Chacko is a joyful activist who draws strength, knowledge, and warmth from the strong alliances and friendships forged with people from different walks of life. She is the Executive Director of Solidarity Foundation, an NGO that supports grassroots level organisations of sexual and gender minorities (LGBTIAQ+) and sex workers by building collectives, capacities and connections. She has been recognized as a global diversity leader (Times Ascent Award) at World HRD Congress, Mumbai 2017. Shubha is also a researcher and has authored books, reports and articles and has been an invited speaker at many national and international conferences. She has a Master's degree in Social Work from Tata Institute of Social Sciences.

Theresa Jean Tanenbaum ("Tess") (she/her/hers)

Dr. Theresa Jean Tanenbaum (“Tess”) is a researcher, scholar, teacher, designer, artist, tinkerer, maker, and activist who uses digital storytelling and games to help people transform their perspective on the world and their place in it to bring about positive change. She is the founder of the Name Change Policy Working Group, and has worked with COPE, the ACM, SAGE, Springer, Taylor & Francis, Elsevier, and many other publishers to develop identity practices in publishing that safeguard the privacy of transgender authors seeking to update their scholarly records to reflect their correct names.

Umut Pajaro (they/them)

Umut holds a Bachelor’s in Communications Studies from the University of Cartagena (Colombia) and an MA in Cultural Studies from the National University of Rosario (Argentina). Their main research focus has been LGBTQI issues and queer representation in media. In the last couple of years, as part of the Youth Special Interest Group of the Internet Society (ISOC), they have focused on gender-diverse representation online and on topics related to artificial intelligence, ethics, and social computing.

Accepted Works

  • Extended Abstract: The Affective Growth of Computer Vision by David Crandall, Norman Su

    The success of deep learning has led to intense growth and interest in computer vision and machine learning, along with concerns about their potential impact on society. Yet, we know little about how these changes have affected the people that research and practice these fields: we as a community spend so much effort trying to replicate the abilities of humans, but so little time considering the impact of this work on ourselves. In this extended abstract, we briefly summarize a study (which will appear as a full paper in CVPR 2021) in which we asked computer vision and machine learning researchers and practitioners to write stories about salient events that happened to them. Our analysis of over 50 responses found tremendous affective (emotional) strain in the community. While many describe excitement and success, we found strikingly frequent feelings of isolation, cynicism, apathy, and exasperation over the state of the field. This is especially true among people who do not see themselves as part of the "in-crowd." We argue that these feelings are closely tied to the kinds of research and professional practices now expected in computer vision and machine learning. As a community with significant stature, we need to create a more inclusive culture that makes transparent and addresses the real, emotional toil of its members.

  • On the Shared Teleology of Mathematics and Music: Theory, Applications, and Perspectives by MaryLena Bleile

    This paper characterizes the main tenets of my philosophy on music, mathematics, and language, as informed by my experience in these fields. Motivated by several real-world issues, I provide examples of the relevance and importance of the theory (including a discussion of an interesting mathematical result in the field of statistics and PDEs), and conclude with a discussion of how these issues can be resolved.

  • The Gender Panopticon: AI, Gender, and Design Justice by Sonia Katyal (She/her), Jessica Jung (She/her)

    Using recent research from data scientists and technologists, this article argues that we are at a contradictory moment in history regarding the intersection of gender and technology, particularly as it affects LGBTQ+ communities. At the very moment that we see the law embracing more and more visibility regarding gender identities and fluidity, we also see an even greater reliance on surveillance technologies that are flatly incapable of working beyond the binary of male and female. These technological limitations become even more fraught today, when we face a greater degree of surveillance, gender-related and otherwise, than we have ever seen in history. When a binary system of gender merges with the binary nature of code, the result fails to integrate LGBTQ communities, particularly nonbinary and transgender populations, erasing them from view.

Using insights from a wide range of studies on artificial intelligence technologies (automated body scanners, facial recognition, and content filtering on social media), we argue in this Article that we need to grapple with the reality that the relationship between technology and gender is far more complicated than the law currently suggests. Technology companies, along with multiple courts, colleges, and workplaces, must realize that the binary presumptions of male and female identity are largely outdated for some, and often fail to capture the contemporary complexity of gender identity formation. The question for legal scholars and legislatures is how the law, and technology, can and should respond to this complexity. In the final sections, we discuss some of the legal implications of these technologies of surveillance, looking at both the law and the design of technology, and turn to some of the normative possibilities for developing greater equality and gender self-determination.

  • Lips by Val Elefante (she/her)

    Lips (lips.social) is a new, alternative social media platform designed by women, non-binary folks, & the LGBTQIA+ community. Our patent-pending machine learning and blockchain technologies, combined with our proactive feminist moderation policy, create a space online without the biased censorship, harassment, and plagiarism that mainstream platforms currently enable. We built Lips according to the principles of Design Justice, hosting co-design sessions with groups of artists, sex workers, educators, activists, and sex-positive brands, some of the most marginalized folks on mainstream platforms. Then, taking everything we learned, we built a platform where creators from these communities can finally feel safe enough to really, truly thrive. When it comes to AI, we have learned that over 73% of LGBTQIA+ content online is wrongly flagged as "inappropriate," and we are excited to be building a smarter, more nuanced moderation algorithm that actually understands and accepts sexuality and queerness. When marginalized people build technology, the result is more ethical, more secure, more inclusive, and ultimately better for everyone. To remain accountable to our community, we are currently crowdfunding on Wefunder: http://wefunder.com/lips. Check us out!

  • Can You Explain That, Better? Comprehensible Text Analytics for SE Applications by Huy Tu

    Text mining methods are used for a wide range of Software Engineering (SE) tasks. The biggest challenge of text mining is high-dimensional data: a corpus of documents can contain 10^4 to 10^6 unique words. To address this complexity, some very convoluted text mining methods have been applied. Is that complexity necessary? Are there simpler ways to quickly generate models that perform as well as the more convoluted methods and are also human-readable?

To answer these questions, we explore a combination of LDA (Latent Dirichlet Allocation) and FFTs (Fast-and-Frugal Trees) to classify NASA software bug reports from six different projects. Designed using principles from psychological science, FFTs return very small models that are human-comprehensible. When compared to a commonly used text mining method and a recent state-of-the-art system (a search-based SE method that automatically tunes the control parameters of LDA), these FFT models are very small (a binary tree of depth d=4 that references only 4 topics) and hence easy to understand. They were also faster to generate and produced similar or better severity predictions.

Hence we can conclude that, at least for the datasets explored here, convoluted text mining models can be deprecated in favor of simpler methods such as LDA+FFTs. At the very least, we recommend LDA+FFTs (a) when humans need to read, understand, and audit a model, or (b) as an initial baseline method for SE researchers exploring text artifacts from software projects.
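For readers new to these methods, here is a minimal sketch of the general LDA-plus-shallow-tree pipeline described above, using scikit-learn’s generic DecisionTreeClassifier as a stand-in for a true fast-and-frugal tree (FFTs additionally restrict each node to a single cue with an exit branch). The data is invented for illustration; this is not the authors’ code.

```python
# Sketch: LDA topic features + a shallow, human-readable tree classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical bug-report texts with binary severity labels.
reports = [
    "telemetry buffer overflow crashes flight software on startup",
    "typo in console log message",
    "memory leak in attitude control loop causes watchdog reset",
    "broken link in developer documentation",
]
severe = [1, 0, 1, 0]

counts = CountVectorizer(max_features=5000).fit_transform(reports)
topics = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(counts)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(topics, severe)
print(export_text(tree))  # small enough to read and audit by hand
```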

  • Examining narratives of “progress” in AI by Leif Hancox-Li

    Popular narratives of the recent history of AI postulate a series of AI “winters” and “springs”, culminating in the current AI “spring” of widespread excitement over what is perceived to be rapid progress in AI. I examine accounts in textbooks, industry publications, and research papers of how this “progress” or “AI spring” is characterized. In doing so, I reveal a kind of strategic ambiguity in the definitions of “progress” that are at work: a shifting between different definitions that suit the goals of different actors. This allows AI as a field to claim that progress is being made regardless of whether recent developments support the promises made by the field.

  • Crowdsourcing a Corpus of Dogwhistle Transphobia by Paige Yes Treebridge (she/her)

    Simply put, cissexism is the systemic erasure of trans people from existence, particularly in language. Transphobia is often presented as a cissexist expression, rooted in the fear of how trans lives differ from cisnormative lives. Social media spaces are somewhat anonymous and present ample opportunity for people to be overtly transphobic, to speak out against trans people and our rights. Trans people are regularly and frequently subject to slurs and misgendering that, even if reported, are often ignored by Twitter administration. Beyond obvious slurs and misgendering, there is speech that trans people perceive as transphobic but that is hidden in word choices that may not seem transphobic to non-trans people. The popular (and research-based) term for this language is dogwhistle: speech intended to communicate a message to a specific informed group while hoping to avoid detection by people uninformed about the message or its (often controversial) context. I believe dogwhistle transphobia exists on Twitter; I have found some evidence of other trans people who see the same statements as dogwhistles, and my research will document dogwhistle transphobic tweets, as perceived and validated by trans people, in a corpus, or collection of naturally occurring text. The research will proceed via snowball sampling to avoid biasing the corpus or exposing the participants of the study to risk.

  • Feature Genuinization based Residual Squeeze-and-Excitation for Audio Anti-Spoofing in Sound AI by Ruchira Ray

    Voice modality in human-machine interaction has gained popularity in the last decade due to advances in voice technology. All digital devices support voice as input while employing voice assistants. It is the most used mode of interaction in headless digital appliances and IoT devices. Emerging audio spoofing techniques pose a significant threat to Automatic Speaker Verification (ASV). False wakeups of voice assistants, and their responses to recorded audio replay, raise security concerns and customers’ hesitancy. As applications of ASV and replay detection are ubiquitous, it is essential to make these systems robust. We propose a two-stage hybrid model: a genuinization transformer to efficiently differentiate between the distributions of synthetic and genuine speech and non-speech audio, followed by Residual Squeeze-and-Excitation networks (ResSEnet) to learn relevant latent features and classify audio input as spoofed or bonafide. To handle both speech and non-speech sounds effectively, we use log-mel features. The proposed model is evaluated on the ASVspoof 2019 Logical Access (LA) dataset. Experimental results show that our proposed model significantly improves performance compared to the baseline and state-of-the-art models.
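As background on the building blocks named above, here is a minimal sketch of log-mel feature extraction and a generic squeeze-and-excitation (SE) block in PyTorch. All shapes and hyperparameters are illustrative assumptions; this is not the authors’ ResSEnet.

```python
import torch
import torch.nn as nn
import torchaudio

# Log-mel features: a mel spectrogram followed by log (dB) compression.
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()
waveform = torch.randn(1, 16000)        # one second of dummy audio
logmel = to_db(melspec(waveform))       # shape: (1, 64, n_frames)

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using globally pooled context."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (batch, channels, H, W)
        scale = self.fc(x.mean(dim=(2, 3)))       # squeeze: global average pool
        return x * scale[:, :, None, None]        # excite: per-channel gating

out = SEBlock(channels=16)(torch.randn(2, 16, 64, 100))
print(out.shape)  # torch.Size([2, 16, 64, 100])
```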

  • Is digital colonisation redefining the understanding of agency, bodily autonomy and being human? by Brindaalakshmi. K (they/them)

    This essay explores the impact of datafication on the agency and bodily autonomy of gender and sexual minorities (GSM) in India. The use of patriarchal standards for the GSM community to enter data systems affects the datasets that are fed into Automated Decision-Making Systems (ADMS). The essay explores the changing understanding of being human that results from the introduction of bad data into ADMS.

    Gender and sexual minorities in the context of this essay includes all (married and unmarried) cis women, transgender+ persons, LGBQIA+ persons, persons with disability (because there is often erasure of their sexuality), sex workers (female and trans+), and all those who identify themselves to be non-heteronormative with respect to their gender and/or sexuality.

  • Towards Understanding and Building a Multilingual and Inclusive Web by John Samuel (they/them, he/him)

    The web is evolving at a rapid pace. Internet penetration is increasing across the world, and more mobile devices access the World Wide Web daily. There is a need to understand our linguistically, culturally, and socially diverse world. The old "one model fits all" approach to building software and web applications alienates some communities.

Recent research shows how open and collaborative sites like Wikipedia provide a way for multiple language communities to come together and build a multilingual encyclopedia. Wikidata, which started in 2012, is significant for understanding and building multilingual websites. From multiple sub-domain websites like Wikipedia (en.wikipedia.org, fr.wikipedia.org), each managed by its respective language community, to a single-domain website (www.wikidata.org), the difference is substantial. Multiple language communities need to express their needs to describe their local knowledge, like museums, persons, and (LGBTI+) historical events, on Wikidata. Furthermore, Wikidata intends to build a structured knowledge base and has its own notability guidelines: a single set, in contrast to the multiple notability guidelines managed by the individual language communities. These guidelines play an important role in documenting and improving topics related to minority communities, especially LGBTI+ related topics. Though Wikidata gives the first impression of conversations happening only in English, the site has many options to ensure a multilingual experience. Yet, there is scope for improvement.

An analysis of even a small dataset of multilingual information on Wikidata shows that many languages with few speakers have limited information available in those languages. This work presents some recent analyses of Wikidata items and properties (e.g., WDProp). It also explores possible ways to develop language-agnostic tools for improving language and topic coverage (e.g., OpenRefine, QuickStatements, ShExStatements).

[1] Analyzing and Visualizing Translation Patterns of Wikidata Properties, John Samuel, CLEF 2018, Avignon, France, 10-14 September 2018, Lecture Notes in Computer Science, vol 11018. Springer, Cham

[2] Collaborative Approach to Developing a Multilingual Ontology: A Case Study of Wikidata, John Samuel, Metadata and Semantic Research. MTSR 2017. Communications in Computer and Information Science, vol 755. Springer, Cham

[3] ShExStatements: Simplifying Shape Expressions for Wikidata, John Samuel, Wiki Workshop 2021 (held at The Web Conference 2021), 14 April 2021
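As a concrete example of the kind of multilingual coverage analysis described above, the snippet below asks the public Wikidata SPARQL endpoint how many languages an item has labels in. The query and item choice are our own illustrative assumptions, not code from the cited work.

```python
import requests

# Q42 (Douglas Adams) is a common example item; substitute any item whose
# multilingual label coverage you want to audit.
query = """
SELECT (COUNT(DISTINCT LANG(?label)) AS ?nLanguages) WHERE {
  wd:Q42 rdfs:label ?label .
}
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "coverage-audit-example/0.1"},  # WDQS asks for a custom UA
)
n = resp.json()["results"]["bindings"][0]["nLanguages"]["value"]
print(f"Q42 has labels in {n} languages")
```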

  • Queering Classifications in Recruitment AI by Eleanor Drage

    In 1982, Nancy Cantor, Walter Mischel and Michael Schwartz claimed that categorisation was a cognitive process that structures and gives coherence to our general knowledge about people and the social world. However, by exaggerating the differences between social groups while minimising the differences within them, categorisations misrepresent and make generalisations about groups and individuals, which in turn gives rise to harmful human biases and prejudices. These harms are the focus of a basic tenet of gender studies: that normative and binary categories discipline and police gender and sexual identity. Categorisations, however, are an integral part of how AI creates rules and identifies patterns or biases, with gender, along with race, age and other ‘soft biometric traits’ often used as data parameters in ML and at other stages of AI development processes. This paper draws on work from queer and gender studies, from Judith Butler and Jose Esteban Muñoz to Eve Sedgwick and Gloria Anzaldúa, to investigate how this knowledge can help tackle unresolved issues faced by AI practitioners in relation to classifications and categorisations. I focus on three case studies from recruitment, a sector in which AI is increasingly deployed: 1) talent intelligence platforms which attempt to eliminate bias from hiring processes by “stripping” gender qualifiers on the front and back end of the systems; 2) the “personality profiles” created by AI-video recruiting systems that assess participants in short hiring videos, and which collect the participants’ language, facial expression, clothes and background image as data that can be compared to a training set that has been modelled on the biased judgement of human observers; and 3) the people classifications created and used by image databases which, as Kate Crawford has noted, have included the labels ‘slut’ and ‘rape suspect’. Through these examples, I address the following research questions: Firstly, if mis/representation is one of the most pre-eminent sites in the production of racial and sexual inequality (Hill Collins 1997), should we risk misclassifying an individual’s data by using gender as an input category? How can debates around the representation of Black women (hooks 1992; Gammage 2016) contribute to conversations around how algorithms at once invisibilise and hypervisibilise the oppressed? Secondly, should we be adding more gender options to input categories, and does the proliferation of gender categories actually translate into a more representative product? And lastly, how are categorisations inflected with Western cultural values? By working against disciplinary legitimation and rigid categorisation, queer theory is well-equipped to confront the double binds, aporias and logical impasses that halt the progress of debates around categorisations in AI. This paper therefore situates itself within the emerging body of research that demonstrates how queer theory can be integrated into AI development processes to develop better, fairer technology.

  • Complicating Narratives of Afro-Asian Solidarity: A “Distant Reading” of the 1955 Bandung Conference Proceedings by Nikhil Dharmaraj (he/him)

    In 1955, Bandung, Indonesia hosted the historic Bandung conference, the first large-scale gathering of Asian and African states, many of whom newly enjoyed independence from Western empire. Coordinated by Indonesia, Burma, India, Ceylon, and Pakistan, the conference was designed to be a bastion of post-colonial solidarity against the monsters of capitalism, neoliberalism, and imperialism.

In this project, I utilize “distant reading” techniques, as described by scholar Franco Moretti, to complicate the narrative of Afro-Asian solidarity at the 1955 Bandung Conference. Despite the undeniable significance of Bandung as the first large-scale gathering of Afro-Asian states, often missing in historical interpretations of this conference is the calculus of anti-Blackness, anti-Indigeneity, patriarchy, neocolonialism, and bourgeois nationalisms, all of which destabilize idyllic understandings of post-colonial solidarity at the conference. Utilizing methods of natural language processing (NLP), I conduct a digital humanities analysis of archival documents relating to the Bandung conference to quantify the extent to which Afro-Asian solidarity was truly operational at the 1955 Bandung Conference. In substance, my work designs a novel, generalizable computational model of solidarity, amalgamating known NLP methods, such as sentiment analysis, word embeddings, implicit bias testing, and the Python question-intimacy package, to output two-dimensional quantifications of solidarity.

In digitally analyzing the proceedings of the Bandung conference in 1955, this project more broadly serves to illuminate how artificial intelligence (AI) can be repurposed as a tool of radical, decolonial storytelling. I utilize computational techniques to not only contextualize the Bandung narrative, but also, to illuminate more authentic ways of constructing networks of solidarity between minoritized communities in the current moment. My project, then, ontologically highlights the role that digital humanities/“distant reading” can play within Science and Technology Studies — that is, understanding computation as a potential means of disruption to learn from decolonial histories and literatures.
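To give a flavor of one of the NLP ingredients mentioned above (lexicon-based sentiment analysis), here is a minimal sketch using NLTK’s VADER on invented stand-in passages. This is only one crude axis of “solidarity” and is not the author’s actual model.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

# Hypothetical excerpts standing in for digitized conference proceedings.
passages = [
    "We stand together with our African and Asian brothers in friendship.",
    "The delegation expressed grave doubts about its neighbours' intentions.",
]

sia = SentimentIntensityAnalyzer()
for text in passages:
    # 'compound' is a normalized sentiment score in [-1, 1].
    print(f"{sia.polarity_scores(text)['compound']:+.3f}  {text}")
```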

  • Posthuman Intelligence OR The Importance of Digital Agenthood by gripp

    Posthumanist theory erodes the supremacy of *human being* as a positionality, as well as the logics of a liberal humanist politic. Contemporary scholarship from black and queer theorists points out that ontologies themselves are due for an overhaul, that in fact the positionality *human being* has always been socio-systemically inaccessible to a great many individuals. It is no secret that hegemonic ideology is replicated by computational systems via biased design, implementation, and contexts of execution/usage. This paper argues in favor of a posthuman conception of intelligence, that deprivileges human as a positionality.

  • Corrective Discrimination in Repeated Bank-Lending Simulations by Emma Forman Ling (they/them)

    Research in fair machine learning largely focuses on computationally defining “fairness” notions as properties of a classifier, but while many papers discuss the real-world needs for these notions, little research has assessed their long-term implications. Understanding these implications of group fairness is essential to addressing structural bias in the long run. In fact, many papers on fairness hint at corrective measures to combat structural bias but fail to define what makes them corrective. We introduce a definition of corrective discrimination as a temporal criterion that mitigates structural bias. The ultimate goal of this criterion is to achieve social equality between different groups. We run simulations to assess the performance of the equal opportunity and maximum reward policies in the bank-lending example where credit scores are (1) structurally unbiased and (2) structurally biased. We find that in the unbiased condition, the maximum reward policy is corrective in our settings, whereas the correctiveness of equal opportunity depends on the starting distributions. In the biased condition, the correctiveness of both policies varied, and equal opportunity outperformed maximum reward by a larger margin under the biased condition than without bias.
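To make the simulation setup concrete, here is a toy repeated bank-lending loop in the spirit of the abstract. The score distributions, repayment model, and threshold policy are all invented for illustration; this is not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with different initial credit-score distributions (structural bias).
scores = {"A": rng.normal(650, 50, 1000), "B": rng.normal(600, 50, 1000)}

def step(scores, threshold):
    """One round: lend above a score threshold; repayment outcomes shift scores."""
    for s in scores.values():
        approved = s >= threshold
        repaid = rng.random(s.size) < (s - 300) / 550  # repayment prob. rises with score
        s[approved & repaid] += 20                     # repaid loans raise scores
        s[approved & ~repaid] -= 40                    # defaults lower them
    return scores

for _ in range(50):
    scores = step(scores, threshold=630)

# The remaining gap between group means is one crude measure of correctiveness.
print({g: round(s.mean(), 1) for g, s in scores.items()})
```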

  • Automatic Gender Recognition: Perspectives from Phenomenological Hermeneutics by Yanan Long

  • Using AI tools to produce artwork that reflects LGBTQ+ struggles & triumphs

    As an international student who identifies as a queer Asian woman in STEM, I explore my intersecting identities via art.

I am experimenting with how my identities can be manifested in a digital, artificial sphere. To produce my artworks, I sketch the overall outlines manually, use AI coloring tools to add randomized colors to my sketch, and adjust the coloring schemes to finalize the work.

Some of my artworks involve clear human figures, while others simply involve sharp shards piled on one another. The clear human figures represent the moments when I feel like a complete human being, with all my identities perfectly aligned. On the other hand, the inanimate piles of shards represent the moments when I feel completely shattered and broken by societal standards.

I decided to submit my art to the Queer in AI workshop at ICML 2021 because my artworks demonstrate how collaboration between a human and AI tools can produce meaningful work that portrays the struggles and also the triumphs of LGBTQ+ individuals.

Organizers

Arjun Subramonian (they/them)

Arjun is a brown queer, agender incoming PhD student at the University of California, Los Angeles. Their research focuses on graph representation learning, fairness, and ML ethics. They are a core organizer of Queer in AI, co-founded QWER Hacks, and teach machine learning and AI ethics at Title I schools in LA. They also love to run, hike, observe and document wildlife, and play the ukulele!

Sharvani Jha (she/her)

Sharvani is a fourth year undergraduate computer science student at the University of California, Los Angeles. Her interests include AI ethics + applications of computer science to space exploration. She is a co-founder of QWER Hacks, has led various initiatives (including AI Outreach) at ACM at UCLA, is the External Vice President of SWE at UCLA (and helps spearhead the organization’s lobbying initiatives), and is a software developer for UCLA ELFIN Cubesat.

Vishakha Agrawal (she/her)

Vishakha is a third-year undergraduate Information Science student at Dayananda Sagar College (DSCE), India. She is interested in AI ethics, HCI, and software engineering research. She founded the first Women in Computing community at DSCE to bring research more formally to her college, is an organizer for Indian Women in Computing, and was instrumental in passing a global bill at the UN for the rights of girls everywhere to study STEM.

Umut Pajaro (they/them)

Umut holds a Bachelor’s in Communications Studies from the University of Cartagena (Colombia) and an MA in Cultural Studies from the National University of Rosario (Argentina). Their main research focus has been LGBTQI issues and queer representation in media. In the last couple of years, as part of the Youth Special Interest Group of the Internet Society (ISOC), they have focused on gender-diverse representation online and on topics related to artificial intelligence, ethics, and social computing.

MaryLena Bleile (she/her)

MaryLena is a second-year Ph.D. candidate in a joint program between UT Southwestern Medical Center and Southern Methodist University. She is a member of the Medical Artificial Intelligence and Automation lab, where her research includes deep reinforcement learning and dynamical systems modelling for radiotherapy optimization. MaryLena has been a panelist for the UCLA ACM-AI group’s Queer in AI initiative and a guest contributor to the LGBT STEM blog. Her background is in cello performance, and she is passionate about overcoming false dichotomies, including but not limited to the false dichotomy between art and science, as well as the one between the two traditional genders.

Michelle Julia Ng (any)

Michelle Julia is a Computer Science and History student at Stanford University. They are interested in the implications of AI adoption across industries; their current research revolves around the feasibility of Computer Vision in determining policies around vulnerable populations. At Stanford, they’re involved in curriculum building and teaching the CS+Social Good studio, and building tools for a more equitable cyber world through the Stanford Internet Observatory.

Call for Contributions

We will have a call for submissions to present at our workshop. The submissions must be generally related to the intersection of LGBTQIA+ representation and AI, or be research produced by LGBTQIA+ individuals. The submissions need not be directly related to the themes of the workshop, and they can be works in progress. Please refrain from including personally identifying information in your submission. No submissions will be desk-rejected.

We will open the call on Monday, April 5, 2021 and close it on Monday, June 14, 2021, Anywhere on Earth (AoE), with acceptance notifications going out on a rolling basis. Additionally, we are accepting submissions in any medium, including, but not limited to, research papers, books, poetry, music, art, musings, TikToks, and testimonials. Submissions need NOT be in English. This is to maximize the inclusivity of our call for submissions and amplify non-traditional expressions of what it means to be Queer in AI. You can find excellent examples of “non-traditional” submissions here.

Furthermore, we are launching an undergraduate track for submissions. We encourage all undergraduates to submit their work. We will also publicize outstanding undergraduate research.

All individuals with accepted work will be granted free conference admission. All authors with accepted work will have FULL control over how their name appears in public listings of accepted submissions.

If you need help with your submission in the form of mentoring or advice, you can get in touch with us at queerinaiicml2021@gmail.com.

Submission link: https://cmt3.research.microsoft.com/QAIICML2021/ (while an "Abstract" is required, it need not be formal and can be a brief synopsis of your project)

Important Notes:

  • We highly encourage everyone to apply for the ICML Diversity and Inclusion Fellowship (Google Form or MS Form) for free conference registration!

  • We also highly encourage folks to apply for the ICML Participation Grant if you need financial help with registration, internet bandwidth, VPN, caretaking or accessibility costs.

  • Please email us at queerinaiicml2021@gmail.com if you're unable to get free conference registration via the above two methods.