Post date: May 14, 2021 2:05:11 PM
Can humane algorithms foster inclusive ageing?
Call for proposals: https://www.ukri.org/opportunity/research-into-inclusive-ageing/
Participation in society today depends in part on access to, and facility with, computers and the internet. We are effectively forcing older people to engage with technology in ways that many find difficult and for which they have never been trained. Essential interactions with banks, GP surgeries, council offices, and even restaurants and social gatherings are mediated online or via automated telephone systems. Almost all job applications must now be made online, which can unfairly exclude many older people. The phenomenon is broadly known as the "Digital Divide", and it appears to be worsening. The vast cost advantages that computer-mediated solutions offer to commercial service providers and government agencies will drive ever greater reliance on them.
Even the algorithms used to determine medical care may produce age-discriminatory outcomes. Unless software is regulated as a "medical device", few restrictions apply. Medical calculators, like much software, are often built on limited data, and that data is rarely refreshed as newer evidence becomes available. Even the measures used to evaluate actual or expected outcomes, quality-adjusted life years (QALYs) and disability-adjusted life years (DALYs), may themselves embody a form of ageism.
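A toy calculation, with entirely made-up numbers, suggests how this can happen: because QALY gains are capped by remaining life expectancy, an identical intervention mechanically "scores" fewer QALYs for an older patient, so cost-per-QALY rankings can systematically deprioritise older people.

```python
# Toy illustration (hypothetical numbers): QALY gains from the same intervention
# for a younger and an older patient. Because the gain is capped by remaining
# life expectancy, the identical treatment scores lower for the older patient,
# which is how cost-per-QALY rankings can disadvantage older people.

def qaly_gain(quality_improvement: float, years_of_benefit: float,
              remaining_life_expectancy: float) -> float:
    """QALYs gained = quality-of-life improvement x years over which it is
    enjoyed, truncated at remaining life expectancy."""
    return quality_improvement * min(years_of_benefit, remaining_life_expectancy)

# Same intervention: improves quality of life by 0.2 for up to 20 years.
younger = qaly_gain(0.2, 20, remaining_life_expectancy=45)  # patient aged ~40
older   = qaly_gain(0.2, 20, remaining_life_expectancy=8)   # patient aged ~80

print(f"QALYs gained, younger patient: {younger:.1f}")  # 4.0
print(f"QALYs gained, older patient:   {older:.1f}")    # 1.6
```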
Humane algorithms are computer software that anticipates and serves the needs and capacities of its users. They help the humans who interact with them to understand the outputs and outcomes the algorithms produce, and they complement human skills to improve those outcomes. In essence, humane algorithms are computer algorithms that work well with humans. Insofar as is possible, a humane algorithm (a brief illustrative sketch follows this list):
Creates outcomes and situations that humans would judge as equitable or fair,
Recognises, accepts and accommodates diversity among users,
Is transparent, or at least interrogatable, about its internal functioning,
Checks human inputs for errors and misconceptions,
Does not require humans to respond to queries precisely, immediately, or at all,
Protects privacy by securing or progressively anonymising personal information,
Handles errors and unusual conditions workably,
Does not unnecessarily burden humans,
Accepts control from humans, or relinquishes it back to them, in ways that humans find workable,
Accounts for the needs of marginalised groups in assessments and decision making.
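As a minimal, hypothetical sketch of two of these properties, checking human inputs for likely errors and not demanding a single precise format, consider a routine that parses a birth year typed in several loose forms. The function name, thresholds, and messages below are illustrative only.

```python
# A minimal, hypothetical sketch of two humane-algorithm properties: checking
# human input for likely errors, and not demanding a single precise format.

import re
from datetime import date

def parse_birth_year(text: str):
    """Accept a birth year typed in several loose forms ('1947', 'born 1947',
    '47') and flag values that are probably mistakes, rather than rejecting
    the user outright."""
    digits = re.search(r"\d{2,4}", text or "")
    if not digits:
        return None, "Sorry, I couldn't find a year in that. You can also skip this question."
    year = int(digits.group())
    if year < 100:                       # two-digit year, e.g. '47'
        year += 1900 if year > date.today().year % 100 else 2000
    age = date.today().year - year
    if not 0 <= age <= 120:              # likely a typo: ask, don't fail
        return None, f"That would make you {age} years old. Did you mean a different year?"
    return year, None

print(parse_birth_year("born 1947"))   # (1947, None)
print(parse_birth_year("2947"))        # (None, "That would make you ... Did you mean ...")
```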
Research and development in humane algorithms is becoming more important in light of recent, shocking demonstrations of racial and gender bias in computerised decision making. It builds on computer accessibility rules, such as closed captioning, and on assistive technologies designed to make the use of computers less challenging for people with disabilities.
Perfectly unbiased algorithms may be unachievable, but there should at least be oversight of the ethical values encoded in the algorithms. Currently, there are no standards for such review with respect to ageism and the humane consideration of the needs of older people.
The research needed would address these questions:
What is the variation in computer literacy and internet access among older people, and across subgroups of older people?
How do comfort and facility with human-computer interactions, and trust in them, vary ethnographically with age and subgroup?
How can unconscious bias in programmers, policy makers, and societal structure adversely affect older people?
Are there interactions between ageism and other kinds of discrimination based on gender, race, or ethnic or sexual identity?
How can we detect ageism encoded in computer software and the structures that deploy and use such software?
How can we measure the discriminatory consequences of encoded ageism?
How do older people experience ageism in their interactions with software?
Can algorithms be designed that are more accessible, workable, transparent and fair with respect to older people?
Preliminary research suggests that broad unconscious ageism is encoded in algorithms, but the extent and gravity of its consequences are not well known. Stereotypes suggest older people are bad with computers, and the evidence indicates they have less access to technology, although this is highly variable across classes and subgroups.
The project addresses considerable problems with the accessibility of relevant demographic data. Demonstrating that ageism exists and is important requires subtle statistical analyses, which are subject to Simpson's paradox, to "uncertainty bias" that works against individuals in small groups (such as older people in minority subgroups), and to a host of confounding issues that can be difficult to account for. Effective algorithm-design strategies depend on being able to detect and measure encoded ageism.
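A small illustration with fabricated numbers shows why stratified analysis matters: in the sketch below, older applicants are approved less often in every channel, yet the pooled rates suggest the opposite, a textbook Simpson's paradox that could hide encoded ageism from a naive audit. All figures, group labels, and channel names are hypothetical.

```python
# Hypothetical, made-up numbers illustrating why detecting encoded ageism needs
# stratified analysis: within every channel older applicants are approved less
# often, yet the aggregate rates suggest the opposite (Simpson's paradox).

counts = {
    # channel: {age group: (approved, applications)}
    "online":   {"younger": (9, 10),   "older": (85, 100)},
    "assisted": {"younger": (30, 100), "older": (2, 10)},
}

def rate(approved, total):
    return approved / total

# Per-channel (stratified) rates: older applicants fare worse in both channels.
for channel, groups in counts.items():
    y = rate(*groups["younger"])
    o = rate(*groups["older"])
    print(f"{channel:8s}  younger {y:.0%}  older {o:.0%}")

# Pooled rates: the disadvantage disappears and even reverses, because the two
# age groups use the channels in very different proportions.
for group in ("younger", "older"):
    approved = sum(counts[c][group][0] for c in counts)
    total = sum(counts[c][group][1] for c in counts)
    print(f"overall   {group}: {rate(approved, total):.0%}")
```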
The team should include researchers in
social science, demography (Derrer-Merk, Silva),
health economics, health equality (Giebel),
computer science (Gąsieniec), and
risk analysis, standards development (Wimbush, Ferson, Gray).