Speaker-Bio: Tony Belpaeme is Professor at Ghent University and Visiting Professor of Cognitive Systems and Robotics at Plymouth University. He is a member of IDLab – imec at Ghent and is associated with the Centre for Robotics and Neural Systems at Plymouth. His research interests include social robots, cognitive robotics, and artificial intelligence in general.
Until April 2005 he was a postdoctoral fellow of the Flemish Fund for Scientific Research (FWO Vlaanderen) in the Artificial Intelligence Laboratory at the Vrije Universiteit Brussel. He held a guest professorship at the same university, where he taught introductory artificial intelligence and autonomous systems.
Starting from the premise that intelligence is rooted in social interaction, Tony and his team work to advance the science and technology behind artificial intelligence and social robots. This work yields a spectrum of outcomes, from theoretical insights to practical applications. His theoretical work, in which he argues that interaction is central to natural and artificial cognition and that robots and machines should be sensitive to the language and paralinguistic social mechanisms used by people, has drawn considerable academic attention. He complements this by applying these insights to the design and implementation of robots and robotic applications.
His research is regularly showcased as a funding success by funding agencies, most recently Research Councils UK, the Engineering and Physical Sciences Research Council (EPSRC), and the European Commission. The combination of theoretical cognitive systems research and its application to topics of societal relevance has earned him an international reputation. His research has been exhibited at the Natural History Museum London, the Wellcome Trust, the London Science Museum, and the National Space Centre. He has been featured in IEEE Spectrum, Communications of the ACM, and Scientific American. In 2012 his work was named one of “ten life-changing ideas under research at UK universities” by Research Councils UK, and in 2014 it was lauded as one of “20 new ideas from the UK that will change the world”.
Title: Trust in me, just in me: social robots, AI and trust
Abstract: As an HRI researcher, you subscribe to the idea that one day we will be able to build fully autonomous social robots. Such robots will need AI to power their autonomy. This AI does not yet exist, but it will increasingly not only drive the robot's social cognition but also be key to our attitudes towards the robot. This talk will speculate on the challenges of building AI-driven HRI, consider whether trust and ethics are different when embodied in a social robot, and make the case that the social character of robots requires particular attention from researchers, designers, customers, and policy makers.
Speaker-Bio: Kristin E. Schaefer-Lay is the Chief of the Autonomous Systems Branch of the DEVCOM Army Research Laboratory. Dr. Schaefer-Lay has led research efforts for ARL in the areas of trust, robotics, and autonomy for the Robotics Collaborative Technology Alliance, Applied Robotics for Installations and Base Operations, the Army Wingman Program, and the Human-Autonomy Teaming Essential Research Program. She now leads a team that specializes in robotics and systems engineering across wheels, wings, and appendages. She has over 70 publications and 90 presentations in this and associated areas of research. She holds a PhD and an MS in Modeling & Simulation from the University of Central Florida and a BA in Psychology from Susquehanna University.
Contact Information:
Dr. Kristin E Schaefer-Lay, kristin.e.schaefer-lay.civ@mail.mil
301-518-0053
Title: Trust Metrics and Evaluations in Human-Autonomy Teams
Abstract: Trust in autonomy is a complex topic, further compounded when developing effective human-autonomy teams. To unravel this complexity, this talk begins with a review of operational definitions of trust and the critical features related to the human, the machine, and the information. The second portion of the talk takes a multi-method approach to trust measurement. To quantify trust appropriately, it is important to know which measure or set of measures to use given the conditions under which the team is operating. Not all trust is created equal, and not all measures will capture trust accurately; the situation matters. Finally, the talk concludes with a review of three different technologies that support appropriate trust measurement, analysis, calibration, and management.
Speaker-Bio: John Danaher is a lecturer in the Law School at NUI Galway. He holds a BCL from University College Cork (2006); an LLM from Trinity College Dublin (2007); and a PhD from University College Cork (2011). He was a lecturer in law at Keele University in the UK from 2011 until 2014, and joined NUI Galway in July 2014.
John's research focuses on the ethical, legal and social implications of new technologies. He maintains a blog called Philosophical Disquisitions, and produces a podcast with the same title. He also writes for the Institute for Ethics and Emerging Technologies.
Free, open-access pre-prints of his academic papers can be found on PhilPapers, ResearchGate, and Academia.edu.
Title: The Ethics of Deception in the Design of Social Robots
Abstract: If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? This talk will try to clarify and refine our understanding of the ethics of robotic deception. It does so by making four arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception, superficial state deception, and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. Third, it argues that the third type – hidden state deception – is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use. And fourth, it argues that perceptions of deception affect the trustworthiness of robots, which can, in turn, have serious impacts on our social ethical system, which is premised on trust.