Gender Identity and AI @ AAAI 2021

Program:
February 8, 2021

8:30 Workshop begins

8:30 – 9:30 Welcome; Intro to Gender Identity and AI (Andreea Danielescu, Charlie Negri, Anita de Waard); Q&A

9:30 – 10:15 Keynote 1: Morgan Klaus Scheuerman: "How We Teach Computer Vision To See Race and Gender"
Race and gender have long sociopolitical histories of classification in technical infrastructures, from the passport to social media. Facial analysis technologies are particularly pertinent to understanding how identity is operationalized in new technical systems. This talk will cover two studies on gender and race representations in facial analysis technology. First, I sought to understand how gender is concretely conceptualized and encoded into the commercial facial analysis and image labeling technologies available today. Findings show how gender is codified as a binary in both classifiers and data standards. Second, I examined how race and gender are defined and annotated in the image databases used for training and evaluating models. Though race and gender decisions are rarely justified or defined, they are discussed as apolitical, obvious, and neutral. Together, these two studies show not only how race and gender are represented in facial analysis models, but also the value decisions made at the data level and how the inclusion and exclusion of certain identities propagate through the model pipeline.

10:15 – 10:30 Q&A

10:30 – 11:00 Break

11:00 – 11:45 Keynote 2: Sharone Horowit-Hendler: "Gendering Conversational AI and the Nonbinary Option"
As voice assistants become more prevalent, designers must intentionally craft their bot's personality, since users will attribute a personality to the bot whether or not one was designed. Female-presenting voice assistants are common, reinforcing gender stereotypes by portraying women in submissive roles, while male voices are used to give a bot authority. One way to break down these stereotypes, as well as promote inclusivity, is to create nonbinary text-to-speech (TTS) voices for use in voice assistants. What, then, does a nonbinary voice sound like? While attention has primarily focused on the frequency of the voice, gender is also perceived through other features, including context, word choice, intonation, and more. The existing nonbinary TTS voice incorporated feedback from the nonbinary community throughout the voice design and development process.

11:45 – 12:00 Q&A

12:00 – 12:30 Panel Discussion (keynote speakers and organizers)

12:30 Workshop close

NB: All times are given in Pacific Standard Time (PST).

Keynote speakers:

Morgan Klaus Scheuerman
Human characteristics are increasingly encoded into machine learning (ML) algorithms: into the datasets used to train and evaluate them, into the tasks they are trained to complete, and into the infrastructure of the algorithms themselves. A particularly salient example of algorithmic identity is computer vision (CV) technology trained to conduct facial analysis (FA): image labeling, facial detection, and facial recognition (one-to-one face matching). Morgan researches historically marginalized identities through the "eyes" of computer vision models. His work focuses on the intersection of two perspectives: (1) the technical perspective, encompassing the processes and data that enable machine learning development; and (2) the socio-historical perspective, the underlying philosophy and theory about what makes up social identities. His research agenda centers on understanding where in the pipeline social identity is embedded and how algorithmic identity representation is understood and experienced by human and technical actors.

Sharone Horowit-Hendler

Sharone Horowit-Hendler has a PhD in linguistic anthropology with an emphasis on gender studies. Their dissertation, Navigating the Binary, is a study of gender presentation in the nonbinary community. They also worked with Accenture to design Sam, a fully comprehensive nonbinary text-to-speech voice.

Objectives

As AI-driven systems become the de facto drivers of human interactions, there is a risk of alienating, or even hurting, groups of people who do not see themselves represented in the underlying societal assumptions these systems make about gender identity. AI systems can also propagate harmful stereotypes for individuals who are represented, due to the assumptions built into their design. These stereotypical gender assumptions drive the development, curation, selection, and annotation of software and data, producing systems that affect people of all stripes around the globe. There are several examples of how gender biases baked into AI systems affect a broad range of individuals.

One example is the use of AI to validate whether someone belongs in a space. The social app giggle, which was supposed to be a safe space for all women and girls, uses a selfie to verify the gender of a user. These and other attempts to computationally assign gender identity are troublesome at best. Not only does this affect trans women, it also affects cis women who don't fit mainstream expectations of gender presentation. In another example, nonbinary and trans people are often forced to gender themselves in irrelevant situations, such as taking a test, using public wifi, or making a reservation at a restaurant. Far too often the only choices are male/female, which causes dysphoria and requires people to misgender themselves in order to use the technology. Finally, voice assistants are primarily female-presenting, which reinforces the stereotype of women as friendly, subservient helpers.

To move forward in a way that acknowledges and respects a broader scope of gender identities, change is needed in the systems that drive our world. Our aim is to gather AI researchers, non-cis/queer community members, allies, and peers, along with people who can drive change in AI systems through activism, programming, or legislative means, to address what change is needed and how it can best be achieved.

The key question we hope to explore during this workshop is: “How can AI systems be developed that better represent and include all gender identities?”

We are interested in discussing the full AI/ML pipeline, including:

  1. The data that is collected. This includes asking for information that isn't relevant or necessary, or failing to ensure diversity in the way we build our data sets.

  2. The way in which the data is analyzed and interpreted. Are we building some biases into our models, independent of the data?

  3. The representations of our AI systems/agents and how they may propagate stereotypes.

Target audience:

This workshop is intended for anyone with an interest in the topic, including, but not limited to:

  • Data scientists

  • AI researchers

  • Queer, trans and nonbinary activists

  • Social scientists interested in studying the ethical principles of AI

  • Librarians, information scientists and curators with an interest in gender identity and data

  • Legislators and policy makers who have the capacity to influence governance and regulations with respect to AI

Registration and participation:

We welcome all members of the AI community who are interested in this topic to participate as follows:

Thanks to generous donations from the Elsevier Pride Foundation and the Accenture Pride ERG, we will make available a number of free registrations to AAAI and this workshop for folks who are interested in the topic of gender identity and AI for professional or personal reasons but cannot afford to join.

Please send an email to the Organizing Committee to describe your interest, and we will contact you to see if we can support your attendance.

Organizers

Anita de Waard

(she/her)

Elsevier

a.dewaard@elsevier.com

Bharathi Raja Chakravarthi

(he/him)

Insight SFI Research Centre for Data Analytics, Data Science Institute, National University of Ireland Galway

Charlie Negri

(they/them)

NORCE

Sharone Horowit-Hendler

(they/them)

Andreea Danielescu

(she/her)

Accenture Labs

San Francisco, CA

andreea.danielescu@accenture.com

Sam Vente

(they/them)

De Belastingdienst

savente93@gmail.com

WORKSHOP DATE: February 8, 2021