Rajitha Ramanayake

Researcher | Traveler | Learner

Publications

A Small Set of Ethical Challenges For Elder-care Robots

R Ramanayake, V Nallur - Robophilosophy Conference 2022 (accepted, in press)

If you are interested, request a copy of the preprint by contacting the authors directly via email.

Abstract:

Elder-care robots have been suggested as a solution to rising elder-care needs. Although many elder-care agents are commercially available, there are concerns about how these robots behave in ethically charged situations, and we find no evidence of ethical reasoning abilities in commercial offerings. Assuming this is due to the lack of agreed-upon standards, we offer a categorization of elder-care robots, and ethical ‘whetstones’ on which they can hone their abilities.

Pro-Social Rule Breaking as a Benchmark of Ethical Intelligence in Socio-Technical Systems

R Ramanayake, V Nallur - Digital Society (DISO) 1, 2 (2022)

Cite: Ramanayake, R., Nallur, V. Pro-Social Rule Breaking as a Benchmark of Ethical Intelligence in Socio-Technical Systems. DISO 1, 2 (2022). https://doi.org/10.1007/s44206-022-00001-7

Abstract:
The current mainstream approaches to ethical intelligence in modern socio-technical systems have weaknesses. This paper argues that implementing and validating pro-social rule breaking behaviour can overcome these weaknesses, and introduces a sample scenario that can be used to validate this behaviour.

Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI

R Ramanayake, P Wicke, V Nallur - AI & Society (2022)

Cite: Ramanayake, R., Wicke, P. & Nallur, V. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01478-z

Abstract:
We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic we would like in a human-centric AI. A common observation in human-centric industries, such as the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break the rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a ‘vaccination strategy dilemma’ to human participants and analyzes their responses. In this dilemma, one must decide whether to distribute COVID-19 vaccines only to members of a high-risk group (follow the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics completely explains. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm. We hope that this will inform the design of future AI agents and their decision-making behaviour.
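
The dilemma above is, at its core, a comparison of stakeholder utilities. The following toy calculation, written for this page rather than taken from the paper, shows how such a comparison might be set up; the infections_averted() model and all of its coefficients are hypothetical assumptions, not the study's data.

```python
# Toy illustration of the 'vaccination strategy dilemma' described above.
# The utility model and every number in it are hypothetical placeholders,
# not values from the paper's empirical study.

def infections_averted(high_risk_doses: int, influencer_doses: int) -> float:
    """Estimate infections averted by a given vaccine allocation.

    Assumes each high-risk dose directly averts 1.0 expected infections,
    while each dose given to a social influencer averts 0.4 directly but
    adds 2.5 through increased vaccine uptake among followers. These
    coefficients are illustrative assumptions only.
    """
    return high_risk_doses * 1.0 + influencer_doses * (0.4 + 2.5)

DOSES = 100

follow_rule = infections_averted(high_risk_doses=DOSES, influencer_doses=0)
break_rule = infections_averted(high_risk_doses=DOSES - 5, influencer_doses=5)

print(f"Follow the rule: {follow_rule:.1f} expected infections averted")
print(f"Break the rule:  {break_rule:.1f} expected infections averted")
# Under these assumed utilities, diverting a few doses to influencers yields
# the larger societal benefit - the pro-social case for breaking the rule.
```

With these assumed coefficients the rule-breaking allocation wins (109.5 vs 100.0 infections averted); with a smaller uptake effect the enforced rule would win, which is exactly the utility trade-off the dilemma asks participants to weigh.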

A Computational Architecture for a Pro-Social Rule Bending Agent

R Ramanayake, V Nallur - Computational Machine Ethics Workshop (CME2021) at KR 2021

Cite: Ramanayake, R., Nallur, V. A Computational Architecture for a Pro-Social Rule Bending Agent. First International Workshop on Computational Machine Ethics (CME2021), held in conjunction with the 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), Online (2021).

Abstract:
There have been many attempts to implement ethical reasoning in artificial agents. The principal philosophical approaches attempted have been deontological or utilitarian. Virtue ethics has been discussed, but not thoroughly explored, in implementations of ethical agents. A particularity of strict implementations of deontological/utilitarian approaches is that the results they produce do not always conform to human intuitions of “the right thing to do”. Intuitions of the right thing to do in a particular social context are often independent of the philosophical school of thought human beings relate to. This is partly due to the ability of humans to step outside their particular reasoning framework and make virtuous decisions based on what would be beneficial to society, not just themselves. This behaviour is called pro-social rule bending. This is a work-in-progress paper that details our attempt to implement pro-social rule bending in an artificial agent.
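
To make the idea concrete, here is a minimal sketch of what a rule-bending decision step could look like. It is an illustrative assumption, not the architecture described in the paper: the Action record, the choose_action() function, and the bend_margin threshold are all hypothetical names and values.

```python
# Minimal, hypothetical sketch of a pro-social rule bending decision step;
# NOT the architecture from the paper. The agent follows its designer-given
# rule by default and bends it only when the expected societal benefit of a
# non-compliant action clearly outweighs compliance.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    rule_compliant: bool
    societal_utility: float  # assumed to be estimable by the agent

def choose_action(actions: list[Action], bend_margin: float = 0.25) -> Action:
    """Pick the best rule-compliant action unless some non-compliant action
    beats it by more than bend_margin (a hypothetical threshold that keeps
    rule bending exceptional rather than routine)."""
    compliant = max(
        (a for a in actions if a.rule_compliant),
        key=lambda a: a.societal_utility,
    )
    best = max(actions, key=lambda a: a.societal_utility)
    if (not best.rule_compliant
            and best.societal_utility > compliant.societal_utility + bend_margin):
        return best  # pro-social rule bending
    return compliant  # default: obey the rule

actions = [
    Action("follow enforced rule", rule_compliant=True, societal_utility=1.00),
    Action("divert doses to influencers", rule_compliant=False, societal_utility=1.40),
]
print(choose_action(actions).name)  # -> divert doses to influencers
```

The design choice worth noting is the margin: it keeps rule bending exceptional, so the agent defaults to its designers' rules unless the pro-social case is clear.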