Ageism is understood as “discrimination or unjust treatment of older people based on stereotypes” (Reframing Age). To foreground structural ageism, we draw on Ayalon and Tesch-Römer’s (2018) definition of ageism as “the complex, often negative construction of old age, which takes place at the individual and the societal levels.” Ageism can be found in how the problems that AI solutions target are defined, as well as in the interconnected invisibility and hypervisibility of older adults in AI practices.
Imagine a society where hiring algorithms disproportionately favor younger people over older people, where systems are less likely to accept a housing application from an older person with a disability, or where a person with gray or white hair is less likely to be recognized by an image detection algorithm. These scenarios sound ageist, unfair, and absurd, yet each has already happened (Terrell, 2019; Ajunwa, 2019; Vincent, 2021).
AI has the potential to be helpful, but it also carries risks. Researchers have begun exploring these risks in the context of AI bias by gender, race, ethnicity, and disability status (e.g., Bennett et al., 2021). We extend an emerging body of work describing how technology more broadly can be biased against older people (Chu et al., 2022a; Chu et al., 2022b; Meade, 2019) by discussing how AI-powered technologies and systems can also be ageist. Recent work has shown that older adults (ages 60+) are not well represented in the large datasets often used as training data for AI systems (Park et al., 2021). This underrepresentation renders invisible the needs of an entire segment of the global population on the basis of age.
Limited representation also carries over to other identity characteristics. Because age and disability are correlated, a lack of age representation can also mean a lack of disability representation in datasets, algorithms, and systems. For example, older people with disabilities have experienced more age discrimination in housing contexts than older people without disabilities (Francis and Silvers, 2009). At the intersection of age and gender, older women have experienced more age discrimination in the workplace than younger women (Ellevate, 2020; Burn et al., 2019; Neumark et al., 2019). Such a lack of representation increases the risk of amplifying ageism and other forms of bias offline in areas such as hiring, housing, health, finance, and analytics. We therefore urge researchers to consider ageism and how age intersects with other identities in order to design more inclusive AI-powered systems that represent identity holistically, rather than focusing on one identity characteristic at a time. To engage in this discussion, we will highlight how ageism and other forms of AI-based discrimination cannot be addressed at a single level or by siloed disciplines (Berridge et al., 2021).
Scholarship on older adults and artificial intelligence has highlighted age bias both in technologies aimed at supporting older adults, such as those designed for care, and in the models that inform such technologies. For example, AI-powered smart home technologies designed to address an ongoing caregiver shortage can become embroiled in deficit-based ageist dynamics, such as false assumptions about older adults’ ability to understand, consent to, or meaningfully refuse activity sensing and monitoring devices (Berridge, 2020). Further, researchers have shown that language models rate aging-related terms for older adults as more negative than positive (Díaz et al., 2021; Díaz et al., 2018). Additionally, older adults’ behaviors may be incorrectly interpreted by AI models. For example, Brewer et al. (2021) showed that older adults engage in less visible behaviors on social media yet find this type of engagement meaningful; models that treat this behavior as negative or as a form of “lurking” may inappropriately encourage more visible interactions on social platforms.
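To make this kind of sentiment asymmetry concrete, consider the minimal probing sketch below. It assumes an off-the-shelf sentiment classifier from the Hugging Face transformers library, and the template sentence and age terms are illustrative constructions of ours; this is a simplified analogue, not the method or stimuli of Díaz et al.

```python
# Minimal sketch: probing a sentiment model for age-related asymmetries.
# The template and terms below are illustrative, not the stimuli used
# by Díaz et al. (2018, 2021).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

TEMPLATE = "The {term} asked a question at the meeting."
TERMS = ["young person", "middle-aged person", "older adult", "elderly person"]

for term in TERMS:
    sentence = TEMPLATE.format(term=term)
    result = classifier(sentence)[0]  # e.g., {'label': 'NEGATIVE', 'score': 0.98}
    print(f"{term:20s} -> {result['label']} ({result['score']:.3f})")

# If otherwise-identical sentences score more negative as the age term grows
# older, the model encodes an age-related sentiment asymmetry.
```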
In technologies designed for general use, ageism is reflected and reproduced in the data sources and data collection processes underlying a range of AI applications. Older adults are documented to be disproportionately underrepresented on many of the Internet platforms typically scraped for training and testing data (Park et al., 2021; Stypinska, 2021). Representation challenges extend to annotation as well: older adults are underrepresented among data annotators (Difallah et al., 2018), and researchers have identified differences in the sentiment annotations provided by older Black Americans compared with older white Americans (Prabhakaran et al., 2021). Yet age representation in specific datasets is rarely documented, and when it is, the documentation tends to be general and inconsistent.
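As one way to make such documentation routine, the sketch below audits the age distribution of a labeled dataset against external population benchmarks. The file path, the age column, the bracket boundaries, and the benchmark shares are all hypothetical placeholders to be replaced with figures for the population of interest (e.g., census data).

```python
# Minimal sketch: auditing age representation in a dataset.
# The file path, `age` column, brackets, and benchmark shares are
# hypothetical placeholders for illustration only.
import pandas as pd

df = pd.read_csv("training_data.csv")  # assumed to contain an `age` column

brackets = [0, 18, 40, 60, 120]
labels = ["<18", "18-39", "40-59", "60+"]
df["age_group"] = pd.cut(df["age"], bins=brackets, labels=labels, right=False)

observed = df["age_group"].value_counts(normalize=True).reindex(labels)

# Illustrative benchmark shares (substitute census or other reference data).
benchmark = pd.Series({"<18": 0.22, "18-39": 0.30, "40-59": 0.25, "60+": 0.23})

report = pd.DataFrame({"dataset": observed, "benchmark": benchmark})
report["gap"] = report["dataset"] - report["benchmark"]
print(report.round(3))  # a negative `gap` for "60+" signals underrepresentation
```

Reporting even a simple table like this alongside a dataset would make age representation visible at a glance, rather than leaving it undocumented.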
In the panel, we will discuss (1) how lack of representation and misrepresentation in data, algorithms, and annotation practices are crucial to understanding ageism in AI, and (2) how aging intersects with other identities that have been discussed in the FAccT literature. To do so, we bring together a group of experts across disciplines to share their experiences with ageism in AI through their work. Specifically, we will engage industry, academic, and non-profit experts in aging, accessibility, computing, and policy, asking how to create fair and equitable AI experiences that better represent older adulthood, and how to do so in meaningful ways. Drawing on critical gerontology and feminist approaches, we will also ask panelists how to make older adulthood more visible in our socio-technical systems and in what contexts hypervisibility poses risks. Lastly, we will invite attendees to contribute to this discussion and engage with central questions of AI and ageism.