Cite: R Ramanayake, V Nallur. European Conference on Multi-Agent Systems, 2024 (Accepted).
Abstract:
The consensus is that robots designed to work alongside or serve humans must adhere to the ethical standards of their operational environment. To achieve this, several methods based on established ethical theories have been suggested. Nonetheless, numerous empirical studies show that the ethical requirements of the real world are very diverse and can change rapidly from region to region. This eliminates the idea of a universal robot that can fit into any ethical context. However, creating customised robots for each deployment using existing techniques is challenging. This paper presents a way to overcome this challenge by introducing a virtue-ethics-inspired computational method that enables character-based tuning of robots to accommodate the specific ethical needs of an environment. Using a simulated elder-care environment, we illustrate how tuning can be used to change the behaviour of a robot that interacts with an elderly resident in an ambient-assisted environment. Further, we assess the robot's responses by consulting ethicists to identify potential shortcomings.
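A minimal sketch of what such character-based tuning might look like in code (the virtue names, weights, and API below are illustrative assumptions, not the method from the paper): the robot's "character" is a set of tunable virtue weights, and candidate actions are scored against them.

# Illustrative sketch only: action selection tuned by adjustable
# "character" weights over virtues. Names and values are assumptions,
# not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Character:
    weights: dict[str, float]  # per-virtue weights a deployer tunes per environment

def choose_action(character: Character, candidates: dict[str, dict[str, float]]) -> str:
    # Score each action's virtue profile against the character; pick the best.
    def score(profile: dict[str, float]) -> float:
        return sum(character.weights.get(v, 0.0) * s for v, s in profile.items())
    return max(candidates, key=lambda a: score(candidates[a]))

# A deployment that prioritises resident autonomy over strict safety compliance:
autonomy_first = Character(weights={"autonomy": 0.9, "beneficence": 0.5, "safety": 0.4})
actions = {
    "remind_gently":   {"autonomy": 0.9, "beneficence": 0.5, "safety": 0.3},
    "insist_and_call": {"autonomy": 0.2, "beneficence": 0.8, "safety": 0.9},
}
print(choose_action(autonomy_first, actions))  # -> remind_gently

Retuning the same robot for a safety-first facility would be a matter of changing the weights, not the decision procedure.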
Cite: Ramanayake, Rajitha, and Vivek Nallur. Implementing Pro-social Rule Bending in an Elder-care Robot Environment. International Conference on Social Robotics. Singapore: Springer Nature Singapore, 2023.
Abstract:
Many ethical issues arise when robots are introduced into elder-care settings. When ethically charged situations occur, robots ought to be able to handle them appropriately. Some experimental implementations use (top-down) moral generalist approaches, like Deontology and Utilitarianism, for ethical decision-making. Others have advocated bottom-up approaches, such as learning algorithms, to learn ethical patterns from human behaviour. Both approaches have shortcomings in real-world implementations. Human beings have been observed to use a hybrid form of ethical reasoning called Pro-Social Rule Bending, where top-down rules and constraints broadly apply, but in particular situations, certain rules are temporarily bent. This paper reports on implementing such a hybrid ethical reasoning approach in elder-care robots. We show through simulation studies that it leads to better upholding of human values such as autonomy, whilst not sacrificing beneficence.
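A rough sketch of the shape such a hybrid reasoner could take (the rule, threshold, and function below are assumptions for exposition, not the paper's implementation): a top-down rule filters actions by default, and is bent only when the situational pro-social case is strong enough.

# Sketch of hybrid reasoning with pro-social rule bending (PSRB).
FORBIDDEN = {"enter_room_uninvited"}  # top-down constraint: normally never done
BEND_THRESHOLD = 0.75                 # how strong the pro-social case must be

def permissible(action: str, prosocial_gain: float) -> bool:
    # Apply the rule by default; bend it only for a sufficiently
    # strong pro-social justification in the situation at hand.
    if action not in FORBIDDEN:
        return True
    return prosocial_gain >= BEND_THRESHOLD

print(permissible("enter_room_uninvited", 0.2))   # routine check-in -> False
print(permissible("enter_room_uninvited", 0.95))  # suspected fall   -> True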
Cite: Ramanayake, Rajitha, and Vivek Nallur. A Small Set of Ethical Challenges for Elder-Care Robots. Frontiers in Artificial Intelligence and Applications, IOS Press, 2023, pp. 70-79.
Abstract:
Elder-care robots have been suggested as a solution to rising elder-care needs. Although many elder-care agents are commercially available, there are concerns about how these robots behave in ethically charged situations, and we find no evidence of ethical reasoning abilities in commercial offerings. Assuming that this is due to the lack of agreed-upon standards, we offer a categorization of elder-care robots, and ethical ‘whetstones’ for them to hone their abilities.
Cite: Ramanayake, R., Nallur, V. Pro-Social Rule Breaking as a Benchmark of Ethical Intelligence in Socio-Technical Systems. DISO 1, 2 (2022). https://doi.org/10.1007/s44206-022-00001-7
Abstract:
The current mainstream approaches to ethical intelligence in modern socio-technical systems have weaknesses. This paper argues that implementing and validating pro-social rule breaking behaviour can serve as a mechanism to overcome these weaknesses, and introduces a sample scenario that can be used to validate this behaviour.
Cite: Ramanayake, R., Wicke, P. & Nallur, V. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01478-z
Abstract:
We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a ‘vaccination strategy dilemma’ to human participants and analyzes their responses. In this dilemma, one must decide whether to distribute COVID-19 vaccines only to members of a high-risk group (follow the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield a greater overall benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics completely explains. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm. We hope that this will inform the design of future AI agents and their decision-making behaviour.
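As a toy illustration of the stakeholder-utility comparison at the heart of the dilemma (all utility numbers below are invented for exposition; they are not the study's data):

def total_utility(per_stakeholder: dict[str, float]) -> float:
    return sum(per_stakeholder.values())

# Hypothetical per-stakeholder utilities for the two choices.
follow_rule = {"high_risk_group": 0.9, "influencers": 0.1, "wider_society": 0.4}
break_rule  = {"high_risk_group": 0.7, "influencers": 0.8, "wider_society": 0.8}

# A purely utilitarian agent breaks the rule whenever the total is higher;
# a strictly deontological one never does. The study's finding is that human
# PSRB tracks *which* stakeholders gain or lose, not just these two verdicts.
print(total_utility(break_rule) > total_utility(follow_rule))  # -> True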
Cite: Ramanayake, Rajitha, and Vivek Nallur. A Computational Architecture for a Pro-Social Rule Bending Agent. First International Workshop on Computational Machine Ethics (CME 2021), held in conjunction with the 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), Online, 2021.
Abstract:
There have been many attempts to implement ethical reasoning in artificial agents. The principal philosophical approaches attempted have been deontological or utilitarian. Virtue ethics has been discussed but not thoroughly explored in implementations of ethical agents. Strict implementations of deontological/utilitarian approaches produce results that do not always conform to human intuitions of “the right thing to do”. Intuitions of the right thing to do in a particular social context are often independent of the philosophical school of thought human beings relate to. This is partly due to the ability of humans to step outside their particular reasoning framework and make virtuous decisions based on what would be beneficial to society, not just themselves. This behaviour is called pro-social rule bending. This is a work-in-progress paper that details our attempt to implement pro-social rule bending in an artificial agent.
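One possible shape of such an agent's decision loop, sketched under assumed names (this is not the architecture from the paper): rules gate actions as usual, and a separate bending policy is consulted only when a rule blocks the proposed action.

# Illustrative decision loop for a PSRB agent; names and control flow
# are assumptions for exposition.
from typing import Callable

Rule = Callable[[str, dict], bool]        # rule(action, situation) -> allowed?
BendPolicy = Callable[[str, dict], bool]  # is bending justified here?

def decide(action: str, situation: dict, rules: list[Rule],
           bend: BendPolicy, fallback: str) -> str:
    if all(rule(action, situation) for rule in rules):
        return action    # the usual, rule-compliant path
    if bend(action, situation):
        return action    # virtuous exception: bend the rule this once
    return fallback      # otherwise stay within the rules

# Toy instantiation: a privacy rule vs. a suspected emergency.
privacy: Rule = lambda a, s: not (a == "open_door" and s["door_locked_by_resident"])
emergency: BendPolicy = lambda a, s: s["suspected_fall"]

state = {"door_locked_by_resident": True, "suspected_fall": True}
print(decide("open_door", state, [privacy], emergency, "call_for_help"))  # -> open_door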