Eco-socio-botics 2022: Social Robotics for Sustainability
Workshop, 16th December 2022
organized by
Ilaria Alfieri, Antonio Fleres, Luisa Damiano
A satellite workshop of the International Conference on Social Robotics 2022 (ICSR2022), 13-16 December 2022
Talk Details:
Trust the Expert, not the AI: Agency, Reliability, and Trustworthiness of Autonomous Robotic Systems
Emily LaRosa
University of California, San Diego, U.S.
Abstract: As ‘carebots’ become more of a reality in our healthcare landscape, how do we ensure that patients trust the right agents in their health journey? Cultivating appropriate and healthful trust dynamics between the next generation of social robots and humans is vital for sustainable system deployment. As developers move forward with the creation and deployment of robotic healthcare systems, we must address the real, rather than imagined, relationships which underpin their successful deployment in patient care scenarios. Physical manifestations of Healthcare AI (HCAI) stand to confuse the patient, as well as the deployer, about the role of a patient’s agency in their care. Principles developed when attempting to ensure socially sustainable, ethical deployment of an HCAI system ought to include (i) trust articulation between deployers and the appropriate agents; and (ii) iterative evaluation measures of the social dynamics that come into play when we rely on such systems for ‘best’ healthcare practices.
When looking at the future of robotics through the lens of artificial intelligence, agency, and societal trust, complex issues emerge. This paper is concerned not with background AI systems, but with autonomous healthcare systems which are dynamic and physically present: systems which interact with their environment in ways which then influence their next decision. From telepathology cutting down lab turnaround times, to AI-driven passive monitoring allowing the elderly to age in place, we see AI pervading the healthcare system at a breakneck pace. All HCAI, like any expert autonomous system, has a myriad of expertise and trial and error grounding its behaviors. Unlike other AIs, HCAI are unique in that a robotic physical presence is often a desirable and required component: consider the Vulcan Project, an intelligent wheelchair system currently undergoing design trials. The goals of the Vulcan are (i) to develop and evaluate an intelligent robot that is of genuine use to the human it aids; and (ii) to build a system that can communicate naturally with its deployer about the shared task of getting to their desired destination. The project integrates two systems: an intelligent robotic wheelchair, and a telepresence system that transmits perceptions to the human driver so robot and human can collaboratively overcome obstacles.
HCAI systems are often hoped to act as stand-ins for healthcare experts, moving into decision-making recommendation and care-provision roles in the years to come. The Vulcan underscores both how HCAI robotics can truly benefit those whose quality of life is affected by their need for care, and how a patient’s attributing full agency to a system may damage long-term success. Within the scope of such systems, autonomy refers to the system’s ability to act without external overriding direction, and its ability to react appropriately to external stimuli. HCAI autonomy differs dramatically from an individual’s agential autonomy, as agential autonomy is following one’s own desires and acting accordingly. Autonomous systems do not themselves have desires; therefore, the systems cannot have agential autonomy, and cannot be trusted alone. If agential trust is misplaced, and the system fails, we may see disuse of the technology before its maturation. What types of trust ought to be fostered in patients under the care of such a system is critical to determine prior to deployment, and to evaluate iteratively for sustained success.
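The shared-control pattern attributed to the Vulcan above can be made concrete with a small, purely illustrative sketch in Python. None of the names, thresholds, or behaviors below come from the actual Vulcan Project; they simply show how a system can act without external direction when confident, and hand control to the human driver via telepresence when it is not:

# Illustrative sketch only: a hypothetical shared-control loop, not the
# Vulcan Project's real architecture or API.
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_ahead: bool
    confidence: float  # the robot's confidence in its own plan, in [0, 1]

def robot_plan(p: Perception) -> str:
    """Autonomous policy: react to stimuli without external direction."""
    return "detour_left" if p.obstacle_ahead else "continue"

def human_decision(p: Perception) -> str:
    """Stand-in for the human driver's choice, relayed over telepresence."""
    return "detour_right"  # e.g., the human sees context the sensors miss

def shared_control_step(p: Perception, defer_below: float = 0.6) -> str:
    # Collaborative obstacle handling: low confidence hands control to the human.
    return human_decision(p) if p.confidence < defer_below else robot_plan(p)

print(shared_control_step(Perception(obstacle_ahead=True, confidence=0.9)))  # detour_left
print(shared_control_step(Perception(obstacle_ahead=True, confidence=0.3)))  # detour_right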
A distinct challenge emerges: when using such an autonomous system in care decisions, how can we ensure that the deployer is trusted as an expert by the other agents in the decision-making process? Rather than treating trust and reliance as an either-or scenario, they should be seen as positive, in-tandem requirements for a successful system. Trust requires, at minimum, two agents: a trustor and a trustee. As HCAI robots are not agents, the human system expert is necessary to ground a patient’s trust in the system. In seeking a justifiably trustworthy system, ensuring continuous trust in the system’s experts ought to take priority; this trust can then be extended to the system itself. Such extensional trust ought also to be evaluated and reflected upon by developers prior to in-situ deployment. Whether this trust is appropriate and necessary will have an enormous impact on whether the use of the HCAI robot is socially sustainable. When an HCAI robot is present and making impactful decisions, we need to be able to recognize and re-establish in whom patients place their trust. Trust must be appropriately anchored to ensure the deployment of a system is both appropriate and socially sustainable. In securing an anchoring trust dyad for an AI system deployed in a community, developers move closer to the ethical deployment of physically manifested, robotic artificial intelligence that serves care providers and patients alike. For a system’s presence in healthcare to be sustainable and continuous, it is crucial that HCAI experts foster this trust dyad with deployers, and curb unrealistic expectations of their system. It is beyond the scope of this paper to explore whether extensional trust in expertise can fulfill all the functions, and satisfy the desiderata, of trusting the AI itself, were it the primary agent. Rather, the goal is to demonstrate that extending trust from the expert-client dynamic to the physically manifested AI system is one way to ensure continuous, justifiable trust in, and sustainable use of, an HCAI system.
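To make iteratively evaluated, extensional trust more tangible, here is a toy numerical sketch, again in Python and not drawn from the paper: the patient’s trust in the HCAI system moves with the system’s observed reliability but stays anchored by, and capped at, the patient’s trust in the human expert. The update rule, cap, and all numbers are illustrative assumptions, not a claim about how trust actually behaves:

# Toy model only: hypothetical update rule and values, not an empirical claim.
def update_trust(trust_in_expert: float,
                 trust_in_system: float,
                 observed_reliability: float,
                 learning_rate: float = 0.2) -> float:
    """One iterative evaluation step: move system trust toward observed
    reliability, but never above the anchoring trust in the expert."""
    updated = trust_in_system + learning_rate * (observed_reliability - trust_in_system)
    return min(updated, trust_in_expert)  # extensional trust stays anchored

trust_expert, trust_system = 0.9, 0.5
for reliability in [0.8, 0.9, 0.95, 0.6]:  # simulated deployment observations
    trust_system = update_trust(trust_expert, trust_system, reliability)
    print(f"reliability={reliability:.2f} -> trust_in_system={trust_system:.2f}")

On this picture, a failure of the system erodes trust in the system without necessarily destroying the anchoring trust in the expert, which is what allows calibrated re-engagement rather than wholesale disuse.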
Bibliography
Azevedo-Sa, H., S. K. Jayaraman, X. J. Yang, L. P. Robert, and D. M. Tilbury. "Context-Adaptive Management of Drivers’ Trust in Automated Vehicles." IEEE Robotics and Automation Letters 5, no. 4 (2020): 6908-6915.
Baier, Annette. "Trust and antitrust." Ethics 96, no. 2 (1986): 231-260.
Billings, Deborah R., Kristin E. Schaefer, Jessie Y. Chen, Vivien Kocsis, Maria Barrera, Jacquelyn Cook, Michelle Ferrer, and Peter A. Hancock. "Human-animal trust as an analog for human-robot trust: A review of current evidence." DTIC Final Report (2012).
Binns, Reuben. "Algorithmic accountability and public reason." Philosophy & Technology 31, no. 4 (2018): 543-556.
Brannigan, Michael C. Caregiving, Carebots, and Contagion. Rowman & Littlefield, 2022.
Coeckelbergh, Mark. "Can we trust robots?" Ethics and Information Technology 14, no. 1 (2012): 53-60.
Johnson, Collin, and Benjamin Kuipers. "Socially-aware navigation using topological maps and social norm learning." In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 151-157. 2018.
Kirkpatrick, Jesse, Erin N. Hahn, and Amy J. Haufler. "Trust and Human-Robot Interactions." Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence (2017): 142-156.
LaRosa, Emily, and David Danks. "Impacts on trust of healthcare AI." In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 210-215. 2018.
Leiber, Justin. Can Animals and Machines Be Persons?: A Dialogue. Hackett Publishing, 1985.
Richards, Neil M., and William D. Smart. "How should the law think about robots?" In Robot Law. Edward Elgar Publishing, 2016.
Robinette, Paul, Wenchen Li, Robert Allen, Ayanna M. Howard, and Alan R. Wagner. "Overtrust of robots in emergency evacuation scenarios." In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 101-108. IEEE, 2016.
Roff, Heather M., and David Danks. "“Trust but Verify”: The difficulty of trusting autonomous weapons systems." Journal of Military Ethics 17, no. 1 (2018): 2-20.
Rosenthal-von der Pütten, Astrid M., Nicole C. Krämer, Laura Hoffmann, Sabrina Sobieraj, and Sabrina C. Eimler. "An experimental study on emotional reactions towards a robot." International Journal of Social Robotics 5, no. 1 (2013): 17-34.
Rosenthal-Von Der Pütten, Astrid M., Frank P. Schulte, Sabrina C. Eimler, Sabrina Sobieraj, Laura Hoffmann, Stefan Maderwald, Matthias Brand, and Nicole C. Krämer. "Investigations on empathy towards humans and robots using fMRI." Computers in Human Behavior 33 (2014): 201-212.
Ryan, Mark. "In AI we trust: ethics, artificial intelligence, and reliability." Science and Engineering Ethics 26, no. 5 (2020): 2749-2767.
Triest, Samuel, Matthew Sivaprakasam, Sean J. Wang, Wenshan Wang, Aaron M. Johnson, and Sebastian Scherer. "TartanDrive: A Large-Scale Dataset for Learning Off-Road Dynamics Models." arXiv preprint arXiv:2205.01791 (2022).
Williams, Tom, Collin Johnson, Matthias Scheutz, and Benjamin Kuipers. "A tale of two architectures: A dual-citizenship integration of natural language and the cognitive map." In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pp. 1360-1368. 2017.