CRAFT Session Recap

Q: What are some instances of ageism in artificial intelligence?

  • One substantial problem with technology is the biases embedded in AI systems. For example, empirical research has demonstrated that AI-based sentiment analysis systems score sentences about older adults more negatively than otherwise similar sentences about younger people (a minimal sketch of this kind of probe follows this list). The implications are widespread, as sentiment analysis tools are used in marketing, social media, and financial trading applications.

  • From a regulatory standpoint, we are ill-equipped to prevent or redress harms caused by biased systems. Legal protections for older workers against hiring discrimination have not kept pace with technological advances such as ad-targeting systems that let employers show job postings only to specific groups, or facial recognition tools that automate parts of the hiring process.

  • Another issue is the use of AI and surveillance systems in care applications. Many care relationships have unbalanced power dynamics, and AI-powered technologies can limit older adults' autonomy. We must ask whether it is fair that the people least likely to have knowledge and experience with data flows, or awareness of algorithms, are among the most data-surveilled.
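
A minimal sketch of the kind of probe that surfaces this bias, assuming a Hugging Face `transformers` sentiment pipeline; the templates, age terms, and default model are illustrative assumptions, not the materials used in the research cited above.

```python
# Hypothetical probe: score template sentences that differ only in the
# age-group term and compare mean sentiment. The transformers pipeline
# and its default model are illustrative stand-ins.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

TEMPLATES = [
    "The {group} person asked a question at the meeting.",
    "My {group} neighbor started a new hobby.",
    "A {group} employee proposed an idea to the team.",
]

for group in ("young", "elderly"):
    scores = []
    for template in TEMPLATES:
        result = sentiment(template.format(group=group))[0]
        # Fold label and confidence into one signed score so that
        # higher always means "more positive".
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores.append(signed)
    print(f"{group:>8}: mean sentiment = {sum(scores) / len(scores):+.3f}")
```

A consistent gap between the two means on matched templates is the pattern the research describes; a real audit would use far more templates and control for confounds.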

Q: Compare age bias with other areas of identity.

  • Age is distinct from category-level traits such as race or gender: it is experienced progressively, albeit differently by each individual.

  • One perspective is that age-based discrimination is not that different from the sort of harms based on membership in other social groups. Any system that divides people into groups introduces the potential for at least one party to be excluded from design considerations or data.

  • Age-based biases are likely embedded in how we build AI systems. It's rare to see an annotator over the age of 35. Due to the low pay, lack of benefits, and relatively difficult working conditions that characterize annotator occupations, these are not jobs that many older adults hold. Further, research has demonstrated that annotation platforms are relatively inaccessible for older adults with disabilities. It is unreasonable to rely on diversifying the annotator pool to solve all of the challenges associated with identity bias. Why not? First, a small subset of people working in data annotation is unlikely to represent all older adults’ diverse interests and experiences. Second, annotators—even older annotators—might have internalized stigmas against older adults. Surveys of the U.S. population reveal that even older adults have internalized negative ideas about aging. A pressing area for more research is developing cross-cultural understandings of how people perceive aging.

  • While there are areas where age overlaps with other identity axes, it's important to recognize the role that material and economic factors play in undergirding the oppression of marginalized and excluded identity groups. Researchers interested in how technology reproduces marginalization and exclusion in old age might look to the analyses of scholars like Ruha Benjamin to understand how AI is used in structurally ageist, racist, and classist systems.

Q: Building on the discussion of representation, how should we treat identity? One way that we understand identity is as a collection of individual attributes; as a community should we be thinking about identity differently?

  • From a technical perspective, working with individual-level attributes might be the more practical approach. Building models for specific use cases using individual identity characteristics allows us to take existing measures and apply them to different contexts, rather than 'reinventing the wheel' in how we think about identity in sociotechnical contexts.

  • When we think about identity and harm from a policy perspective, we run into the issue of testing a model or application for harm across an ever-increasing number of identity axes (the sketch after this list illustrates how quickly the number of subgroups grows). This approach risks foregrounding harm quantification rather than prevention.
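
A back-of-the-envelope illustration of that issue: because intersectional subgroups multiply across axes, exhaustive per-subgroup testing quickly becomes impractical. The axes and category counts below are hypothetical.

```python
# Hypothetical audit-burden estimate: each identity axis multiplies the
# number of intersectional subgroups a harm test would need to cover.
from math import prod

axes = {
    "age band": 6,          # e.g., decade-wide bands
    "gender": 3,
    "race/ethnicity": 7,
    "disability status": 2,
    "income bracket": 5,
}

subgroups = prod(axes.values())
print(f"subgroups to test: {subgroups}")  # 6 * 3 * 7 * 2 * 5 = 1260

# Each subgroup also needs enough evaluation examples for a reliable
# estimate, so the data requirement scales at least linearly with this.
print(f"examples needed at n=100 per subgroup: {subgroups * 100:,}")
```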

Q: What are the biggest age-related challenges in AI?

  • As an identity axis, age has not garnered the same attention as other identity categories. As a result, we have yet to develop benchmarks that let us evaluate whether we are doing a good job as we develop models in specific application areas (a minimal sketch of an age-disaggregated evaluation follows this list).

  • In the policy arena, the biggest challenge is constructing civil rights laws that govern the technology's limits. Existing laws are not suited to the current constellation of technologies.

  • Critical data scholarship has drawn on critical race studies and gender studies to demonstrate a history of overlaying technologies onto contexts that are already discriminatory. From a critical gerontology perspective, we are overlaying technology onto structurally ageist settings.
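
As a starting point for the benchmarking gap raised above, a minimal sketch of an age-disaggregated evaluation: report a metric per age band rather than a single aggregate score, so that performance gaps affecting older users stay visible. The records, bands, and accuracy metric are placeholders, not an established benchmark.

```python
# Hypothetical disaggregated evaluation: per-age-band accuracy instead
# of one aggregate number. Records stand in for real model outputs.
from collections import defaultdict

# (age, model_was_correct) pairs from an imagined evaluation run.
records = [(23, True), (31, True), (47, True), (52, False),
           (68, False), (74, False), (81, True), (35, True)]

def age_band(age: int) -> str:
    """Coarse bands; the cut points here are arbitrary assumptions."""
    if age < 40:
        return "18-39"
    if age < 65:
        return "40-64"
    return "65+"

correct = defaultdict(int)
total = defaultdict(int)
for age, ok in records:
    band = age_band(age)
    total[band] += 1
    correct[band] += ok

for band in sorted(total):
    print(f"{band:>6}: accuracy = {correct[band] / total[band]:.2f} (n={total[band]})")
```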

Q: What are some potential solutions to the challenges presented by ageism in AI?

  • Participatory and people-centered methods are the first step toward working with older adults through all stages of model building. We should strive for more and better representation of older adults when constructing AI systems, along with collaboration and representation from people in other groups. Ideally, impacted communities will decide whether AI is the solution to their problems; we must give agency to the people affected by a problem and by the solutions we implement to address it. Finally, we should take more intersectional approaches, thinking about how age intersects with other identities and building technology that addresses the needs of the most marginalized.

  • Although there is a lot of work to be done, the policy community has already taken steps toward regulation. Specifically, people are developing policies to regulate disability-related algorithms. While this represents a step forward in establishing regulatory capacity, it would be good to see the Equal Employment Opportunity Commission and the Department of Justice develop regulation to prevent technologically enabled, age-based discrimination. These areas of oversight don't represent long-term fixes, but they can act relatively quickly to create some barriers to harm.

  • We need to measure impact over intention as we look to develop AI in aging-related contexts. It is crucial to center the voices of grassroots activists.

  • We should not work toward a universal AI model. Instead, rigorously scoped models offer an opportunity to implement technology in areas where it can do good. We currently overlook weaknesses in some models because they can be used across a variety of contexts, and that tolerance creates problems for model performance.

Breakout Room Summaries

In breakout rooms, panelists and the audience discussed issues related to ageism in AI, raising more questions than they answered.

  • Group 1

    • Age stands out as a continuous variable, in contrast to categorical identity variables. How do we decide to segment datasets by age? Should we segment datasets at all? (A sketch after this group's questions shows how much the choice of cut points matters.)

    • A similar problem is how we benchmark model performance. Where do we set the bar for what counts as good or bad? Where is the line between age discrimination and designing for age-specific needs?

    • How do we treat (older) workers in the technology industry? From data annotators to software engineers, what are the factors that mediate the ways in which we value their labor?
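
One way to make the segmentation question concrete, as noted in the first bullet above: the same continuous ages produce different group structures depending on the cut points chosen, and any per-group analysis inherits that choice. The schemes below are arbitrary assumptions.

```python
# Hypothetical illustration: the same ages under three segmentation schemes.
ages = [19, 24, 37, 41, 55, 62, 67, 70, 78, 85]

schemes = {
    "decades": [30, 40, 50, 60, 70, 80],
    "coarse": [40, 65],   # "younger / middle / older"
    "none": [],           # no cuts: a per-group analysis sees one group
}

def bin_index(age: int, cuts: list[int]) -> int:
    """Index of the bin an age falls into (0 through len(cuts))."""
    return sum(age >= c for c in cuts)

for name, cuts in schemes.items():
    groups = [bin_index(a, cuts) for a in ages]
    print(f"{name:>8}: {len(set(groups))} groups -> {groups}")
```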

  • Group 2

    • What issues are at stake when using AI in healthcare? Algorithms have the potential to deliver insights that improve patients’ quality of life, but their implementation introduces new vulnerabilities. For instance, how do we know whether an AI is making decisions with high-quality data? How can we judge a model’s accuracy before it is too late?

    • One issue regarding the use of AI in health-related contexts is discrimination. Differences in treatment regimens might be the result of historical discrimination against certain groups, yet some groups believe these regimens to be the most effective. How can we use AI in ways that are sensitive to past harms?