Part I: We don’t need labels!
[Decolonising concepts]
Gender–sex distinction: Gender, unlike sex, is more clearly related to social norms, expectations etc., and its content is constantly changing. Gender and sex are often not addressed appropriately in research and AI.
Gender identity: The personal sense of one’s own gender, based on self-determination. We need to ensure that less visible gender identities are not left out of AI models; it is of great importance that a multitude of identities be incorporated when building them. Since most AI models use a binary encoding of gender, this can harm non-binary people.
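As a minimal, hedged sketch of what that binary encoding looks like in practice (the column values and the use of pandas are illustrative assumptions, not a description of any particular system):

```python
# Illustrative sketch: a strict binary mapping erases every answer that is not
# one of the two listed options, while a categorical encoding keeps them.
import pandas as pd

responses = pd.Series(["woman", "man", "non-binary", "agender", "prefer not to say"])

# Binary encoding: everything outside the two categories becomes NaN (erased).
binary = responses.map({"woman": 0, "man": 1})
print(binary.tolist())            # [0.0, 1.0, nan, nan, nan]

# More inclusive alternative: keep each self-described category as its own column.
inclusive = pd.get_dummies(responses, prefix="gender")
print(list(inclusive.columns))    # one column per reported identity
```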
Queerness: Everything that lies beside or beyond normativity, or the narrative of the “normal”, regarding gender identity or sexuality. An umbrella term that continually takes in more non-binary categories without homogenising them. In the AI era, technology can queer the human experience and open a positive narrative for the future.
It is a general enough term to hold different identities together (e.g. trans people in Brazil, intersex and polyamorous people in Africa are represented), so that they can maintain their own identities while the diversity of LGBTQIA+ is gathered in one word.
Homonationalism: The systematic oppression of queer, racialised and sexualised groups in an attempt to support neoliberal structures and ideals by homogenising gender identity categories, without leaving space for self-determination! It oversimplifies North and South, West and East. It is related to white patriarchy, which suppresses the richness of language and shapes vocabulary.
Decolonisation: The undoing of colonialism, the latter being the process whereby imperial nations establish and dominate foreign territories, often overseas. Western cultures cannot see the imperialistic part (example of Argentina).
Heterogeneity: Gender diversity, visibility, gender fluidity.
Neoliberalism: An invisible hand that makes everything possible. In the neoliberal mindset, people’s needs are left to regulate themselves.
Oppression: AI systems multiply historically and systematically established forms of oppression. Addressing this is tied to the search for equity.
Intersectionality: It helps us ensure that characteristics and attributes related to different identities based on race, class, age etc. are included in the data. Without intersectionality, our understanding of biases in AI remains one-dimensional.
Ethics washing: The practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example among tech giants is a company that promotes “AI for good” initiatives with one hand while selling surveillance tech to governments and corporate customers with the other.
Tokenism: The practice of making only a perfunctory or symbolic effort to do a particular thing, for instance by recruiting a small number of people from mis- or underrepresented groups in order to give the appearance of sexual or racial equality within a workforce or in policy.
Privilege: An advantage or immunity granted or available only to a particular person or group. Privilege goes together with power. However, it can also be a tool for algorithm designers to make a difference in how their software impacts a particular group of people.
Inclusive language: Inclusive language should take into account gender and demographic characteristics. It is highly needed in Natural Language Processing (NLP).
On the practical side, spoken language and inclusive language forms differ, which can make it difficult for algorithms to incorporate the latter. Algorithms are usually trained on English, which discriminates against other languages and against their inclusive forms.
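A minimal sketch of how this can be checked in practice, assuming the Hugging Face transformers library and its public bert-base-multilingual-cased checkpoint (the German example words are illustrative): the more subword pieces a tokenizer splits a gender-inclusive form into, the less that form was represented in the training data.

```python
# Compare how a multilingual tokenizer handles a generic masculine form versus
# gender-inclusive spellings; heavy fragmentation suggests under-representation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

for word in ["Lehrer", "Lehrerinnen", "Lehrer*innen"]:  # teacher (m.), (f. pl.), inclusive
    pieces = tokenizer.tokenize(word)
    print(f"{word!r}: {len(pieces)} pieces -> {pieces}")
```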
Part II: Name them to make them visible!
[GBV concepts]
Gender-based violence (GBV) in AI: A form of power (e.g. gender stereotypes) or of societal violence that is crystallised and replicated in AI design. It can take the form of abusive patterns in AI systems, metrics of what defines attractiveness, or a lack of freedom and visibility for other gender identities
(e.g. engagement metrics on TikTok, machine learning, gender under/misrepresentation in the AI workforce, deepfakes, sexist hate speech and content moderation).
Honour and shame: The flip side of how femininity is conceptualised as property, e.g. honour-related persecution, rape or forced marriage targeting non-binary people. Traces of these can be found in historical data based on law.
Trust: It is both a sociological term (based on social relationships) and an emotional one (existing among people). In the AI era, it also exists between people and the technologies that promote this feeling. However, we need to build trust within communities and institutions, grounded in transparency and accountability.
Trauma: AI systems learn and grow on collective trauma and reproduce it; harmful stereotypes (trauma is a clinical term, e.g. racist or Jewish collective trauma) connect the future to the past. AI does not know this difference, it simply learns from patterns (e.g. promoting body-shaming comments, beauty filters and the objectification of female bodies, content-based AI relying heavily on content moderation, impacts on disabled people). The trauma comes from society.
Toxic masculinity: In the written-text moderation of technological systems, it depends on the moderator whether problematic content is recognised as such.
Manipulation: AI systems shaping identities and what we should feel about them.
Normalisation: The situation in which women and femininities are taught to minimise the pain and accept forms of gender-based violence as normal, or else be stigmatised.
Sexism in AI: Any tool or application of AI that reproduces and reinforces stereotypical ideas about people on the basis of their gender identity and/or sexual orientation, and aims to diminish, underestimate or humiliate them on that basis. Sexism creates models that reinforce, replicate and sustain gender stereotypes, or relies on data that is not representative of the needs and lives of all genders.
Discrimination in AI: Discrimination feeds into AI through design and replicates our analogue assumptions (e.g. when a facial recognition system makes assumptions about a person’s gender).
Gender bias: It is related to mis- or underrepresenting femininities, non-binary and transgender persons in the functionalities of a product. In the context of AI systems, gender bias means assuming someone’s future behaviour on the basis of their gender. Biases can arise at different stages of algorithm building (e.g. intention, coding, implementation, maintenance). A typology of biases is often necessary to avoid gender bias.
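As a minimal sketch (with hypothetical column names and made-up data), one basic check at the implementation or maintenance stage is to compare a model’s positive-outcome rate across gender groups:

```python
# Group-level audit: a large gap in positive-outcome rates across gender groups
# is a first, crude signal of possible gender bias in the model's decisions.
import pandas as pd

audit = pd.DataFrame({
    "gender":   ["woman", "man", "non-binary", "woman", "man", "non-binary"],
    "approved": [0, 1, 0, 1, 1, 0],
})

rates = audit.groupby("gender")["approved"].mean()
print(rates)                                   # positive-outcome rate per group
print("max gap:", rates.max() - rates.min())   # rough demographic-parity gap
```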
Internalised misogyny: A form of sexism that is based on sexist behaviour and attitudes enacted by femininities toward themselves or other femininities. Internalised sexism is a form of internalised oppression.
Restorative justice: A more local, community-based term for convening the person who has committed an offence together with the impacted person(s) of the community. It can build on connectivity, through which we can share more experiences and create safe spaces for communication and empathy.
Part III: Build the tools to envision an AI feminist future!
[feminist ethics, principles & approaches]
Waves of feminism: Feminism is not just about cis women (as the TERF movement would have it) but about all femininities. The same goes for the opposition between white and Black femininities in some feminist waves, and for the treatment of sex workers.
Depatriarchise AI: We should develop AI that addresses this problem of society, or that protects us from it! Set clear boundaries around what is toxic!
Systemic inequality: Institutionally created and reinforced privilege for some groups of people and a lack of privilege and access to resources by others.
Femme tech: Technology that stands against systems of oppression, colonialism, racism, sexism and the exploitation of the natural world. It is technology built to confront and stop all of that, bringing queer, anti-patriarchal and anti-capitalist technology into being.
Sustainability: It is reflected in how we build the future, in the AI supply chain, and in the exploitation of physical and labour resources. A central question is whether we want to use this technology without assessing its environmental impact. It is a matter of intersectional feminism!
Inclusivity: Being open to including more gender identities whenever necessary in order to ensure no one is excluded. For example, teams that design/manage/use AI should be inclusive, not in the sense of tokenism or ethics washing.
Sisterhood/solidarity: How could the feminist community react in a case of gender bias? For instance, if an intimate photo is publicly shared online, it needs to be easier to speak up and report the violence (e.g. through sisterhood apps). It is linked with intersectionality and the Black movement. Technology in some cases fails to foster sisterhood because it is envisioned to support brotherhood. At the very least, we want an opportunity to create an equitable AI.
Abolitionism: A society based on radical freedom, mutual accountability and passionate reciprocity. Abolitionism in AI can be envisioned in relation to predictive policing: we need to set very strong red lines and push the scope of policing back towards requiring actual forms of evidence.
Care: A highly commercialised and industrialised term that is understood in different ways. For instance, the way we understand self-care today is not positive, since it is strongly shaped by capitalism.
Radical community: A community in AI that moves beyond the individual, who is otherwise broken into so many parts. It is associated with collective redress, intersectionality and a more representative audience.
Vulnerability: A concept that should be used ONLY in the context of self-determination, because it strips away capacity and implies that somebody needs help. Many prefer the term “exclusion” instead, meaning the process by which some groups of people are blocked from (or denied full access to) various rights, opportunities and resources that are normally available to members of another group. “Exclusion” is an active rather than a passive word, shedding light on the social action by which SOMEONE (society, a government, a company) excludes these people, who as a result become vulnerable.
However, “vulnerability” does signal the power dynamics and that there is a group being harmed, flagging the need for empowerment and a human-centred approach.
Collective approach: It relates to community rights and to a community-based view of how technology benefits or harms us. However, we need to leave equal space for an individual approach, because the balance changes across contexts. Strategic litigation is a key element of the collective approach, but it needs to be accompanied by a different way of doing pre-emptive risk assessment. It is the tool with which to tackle oppression and mass surveillance and to reconstruct power dynamics and imbalances. An important barrier to this is the opacity of AI systems. Collective mechanisms should be regulated in law.
Burden of proof & platform accountability: These terms relate to raising awareness of how AI works. We need to build an ecosystem of guarantees: a process with checkpoints (much as in transportation) so that AI does not pose a danger or threat to society.
Pronouns: Individuals should be given the possibility to specify their pronouns when interacting with other persons or with algorithms. For instance, pronouns are often written in bios on digital social networks (DSNs). This can help moderators detect waves of harassment against LGBTQIA+ people on DSNs and put LGBTQIA+ content back into its context.
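A minimal sketch of what this could look like at the data-model level, assuming hypothetical field names: pronouns stored as free text chosen by the person, used only when explicitly provided, instead of being inferred from a binary gender field.

```python
# Hypothetical user profile: pronouns are self-declared free text, not derived
# from any gender field, and are only displayed when the person has set them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    display_name: str
    pronouns: Optional[str] = None   # e.g. "she/her", "they/them", "ze/zir"

    def display(self) -> str:
        suffix = f" ({self.pronouns})" if self.pronouns else ""
        return f"{self.display_name}{suffix}"

print(UserProfile("Alex", "they/them").display())  # Alex (they/them)
print(UserProfile("Sam").display())                # Sam
```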
Inclusive design: A value-driven design that takes into account inclusivity, representation and accessibility. It tests AI products so that they serve all humans, and it promotes wellness rather than merely usage-driven incentives. This means inclusivity both in the workforce that puts AI applications together and in the design itself, e.g. accessible interfaces.
Sexual freedom: Being able to express one’s sexuality within a consensual setting. Being able to openly discuss sex and sexuality without physical, symbolic or societal repercussions. Not viewing certain bodies as impure or provocative (e.g. social media platforms’ nudity policies, which tend to discriminate against women and sex workers more than any other category).
Reproductive rights: Of great importance here are the big data collected through menstrual cycle apps, as well as the policing of gendered individuals enabled by data leaks or by data controlled by the state.
Freedom of thought: It is related to “no profiling”, “no ad targeting”, the elimination of biases in predictive models, the objectification of female bodies and the way they are sexualised through the male gaze (a sexualised way of portraying women by heterosexual men), stronger consumer protection and freedom of speech.
Part IV: Towards feminism in data
[Data protection principles from the spectrum of gender]
Transparency: It helps users understand how an algorithm makes a decision. It is related to access to information and resources, and to explaining the reasoning behind a decision. Transparency is not just about publishing some data reports; it is also a process of vigilance for designers at every stage of building an algorithm, for instance listening to the people experiencing the software, gathering feedback, etc.
Black-box effect: In computing, a “black box” is a device, system or program that allows users to see the input and output, but gives no view of the processes and workings between. The AI black box, very simply, refers to the fact that with most AI-based tools, we do not know how they do what they do.
Data minimisation: A principle demanding that data collection be limited to what is directly relevant and necessary to accomplish a specified purpose. It also demands clear and specific information about what data is collected, why, and how it will be used.
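A minimal sketch of the principle in code, with hypothetical column names and a made-up purpose: only the fields needed for the stated purpose are kept, and the purpose is documented next to the selection.

```python
# Data minimisation: retain only what the stated purpose requires.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 2],
    "email": ["a@example.org", "b@example.org"],
    "gender": ["non-binary", "woman"],            # not needed for this purpose
    "birth_date": ["1990-01-01", "1985-06-15"],   # not needed for this purpose
    "newsletter_opt_in": [True, False],
})

PURPOSE = "newsletter delivery"
minimised = raw[["email", "newsletter_opt_in"]]   # drop everything else
print(f"Columns retained for '{PURPOSE}':", list(minimised.columns))
```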
Data gap: It refers to the absence of appropriate data and/or a lack of relevance and quality in the necessary data. It is of great importance for particular types of bias, such as “omitted variable” bias and “already biased data”.
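A minimal sketch of “omitted variable” bias with synthetic data (all coefficients are made up for illustration): when a relevant variable is missing from the data, the estimated effect of the remaining variable silently absorbs part of the missing one’s effect.

```python
# Omitted-variable bias demo: the naive slope of y on x1 is ~2.6, not the true 1.0,
# because the correlated variable x2 is missing from the data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)        # correlated with x1, then "omitted"
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

slope = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
print(round(slope, 2))                     # ~2.6 instead of 1.0
```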
Consent: It should cover only what is given, not whatever is derived from that information. For instance, a user may give consent for someone to know their gender, but this does not mean that they consent to receiving differentiated services on that basis. One cannot expect people to read all the “Terms and Conditions” of every service they use. This has to do with treating users not as objects but as humans.
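A minimal sketch of purpose-bound consent, with hypothetical field and purpose names: data disclosed for one purpose is not silently reused for another.

```python
# Purpose-bound consent: gender shared for a profile page cannot be reused for
# differentiated services or targeting without a separate, explicit consent.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    data_field: str   # e.g. "gender"
    purpose: str      # e.g. "display on profile"
    granted: bool

def is_allowed(records: list[ConsentRecord], field: str, purpose: str) -> bool:
    return any(r.granted and r.data_field == field and r.purpose == purpose
               for r in records)

records = [ConsentRecord("gender", "display on profile", True)]
print(is_allowed(records, "gender", "display on profile"))    # True
print(is_allowed(records, "gender", "targeted advertising"))  # False
```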
Explainable AI: Explainable AI (XAI), or Interpretable AI, is an AI in which the results of the solution can be understood by humans. XAI algorithms are considered to follow the three principles of transparency, interpretability and explainability.
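A minimal sketch of one common post-hoc explanation technique, permutation importance, using scikit-learn on synthetic data (the dataset and model choice are illustrative): it estimates how much each input feature contributes to a trained model’s predictions.

```python
# Permutation importance: shuffle one feature at a time and measure how much the
# model's score drops; larger drops mean the feature mattered more.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```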
Accountability: It clearly defines the role of every actor in the AI value chain (accountability over the entire life cycle of AI systems). It is of great importance to set up control mechanisms. For an AI system to be accountable, one should define who is to be held responsible in case of misuse or malfunction.
Data sovereignty: It refers to the idea that a country or jurisdiction has the authority and right to govern and control the data generated within its borders. This means that the government has the power to regulate the collection, storage, processing, and distribution of data that originates within its territory.
Data sharing vs. data protection: This dilemma concerns, on the one hand, the need for users to be informed and aware of the consequences of data sharing, while on the other hand they can use data sharing as a tool to share their experiences, raise awareness of these issues, empower themselves and their community, and feel proud of being themselves!
However, there is peer pressure to share these stories, which in some cases is re-traumatising, alongside the risk of being watched and placed under surveillance.
Open source and alternative privacy models could be a solution.
Data activism: Being mindful of how our data is used. We need to give people more agency. It is based on a theory of change and the secret feminine, and was introduced by Stefania Milan. It requires two types of action: 1. reactive actions (anonymity, data protection) and 2. proactive actions (giving visibility to marginalised people, Black data, abortion data, a different approach to the availability of data).
It can be empowering; however, there is the problem of individualism, or of call-outs, whereas collaborative approaches are needed.