Tutorial on Formal Methods for Machine Ethics @ ECAI 2023
Brief description of the tutorial
The purpose of the tutorial is to provide an overview of the use of formal methods developed in AI, including logic, automated planning and game theory, for modeling ethical concepts and for endowing AI systems (e.g., a classifier system, a social robot, a conversational agent) with the capacity to behave ethically, to make ethical decisions and to learn under ethical constraints. The tutorial is structured in two parts. The first part is devoted to illustrating some crucial concepts in machine ethics, including i) the concepts of ethical value and evaluation and their role in decision-making, ii) the concept of responsibility in both its causal and agentive forms, and iii) the concept of moral emotion, with special emphasis on guilt. The second part is devoted to showing how formal methods can be used to formalize these concepts and to incorporate them into the reasoning and decision-making processes of an AI system.
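To give a purely illustrative flavour of the second part, the following Python sketch (not taken from the tutorial materials; all names such as ACTIONS and satisfies_value are hypothetical) shows one simple way an ethical value, read here as a "never cause harm" constraint in the spirit of an LTL formula, can restrict the plans an agent may select before it optimises its own utility.

from itertools import product

# Hypothetical action repertoire with toy effects; names and numbers are illustrative only.
ACTIONS = {
    "move":     {"progress": 1, "harm": 0},
    "shortcut": {"progress": 2, "harm": 1},  # faster, but harms a bystander
    "wait":     {"progress": 0, "harm": 0},
}

def satisfies_value(plan, actions=ACTIONS):
    """Ethical value as a simple temporal constraint, 'never cause harm'
    (a toy stand-in for an LTL formula such as G(not harm))."""
    return all(actions[a]["harm"] == 0 for a in plan)

def utility(plan, actions=ACTIONS):
    """Instrumental goodness of a plan: total progress achieved."""
    return sum(actions[a]["progress"] for a in plan)

def ethical_plan(horizon=3):
    """Enumerate all plans up to the horizon, keep only those complying
    with the ethical value, then maximise utility among the survivors."""
    candidates = [p for p in product(ACTIONS, repeat=horizon) if satisfies_value(p)]
    return max(candidates, key=utility) if candidates else None

if __name__ == "__main__":
    print(ethical_plan())  # ('move', 'move', 'move'): the best harm-free plan

In this toy example the plan ('shortcut', 'shortcut', 'shortcut') would maximise progress but is discarded because it violates the value, so the agent settles for ('move', 'move', 'move'). The tutorial itself develops far richer formalizations of values, responsibility and moral emotions than this fragment suggests.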
Presenter
General information
Name: Emiliano Lorini
Affiliation: IRIT, CNRS, Université Toulouse III Paul Sabatier, France
E-mail address: Emiliano.Lorini@irit.fr
Web page: https://www.irit.fr/~Emiliano.Lorini/
Short bio
Emiliano Lorini is a CNRS senior researcher and co-head of the LILaC team (Logic, Interaction, Language and Computation) at the Institut de Recherche en Informatique de Toulouse (IRIT), Université Paul Sabatier, France. His main expertise is in the development of formal languages and semantics, based on logic and game theory, for modeling the reasoning and decision-making of both human and artificial agents, as well as several aspects of social interaction such as the concepts of norm, trust, responsibility, power, persuasion and social influence. He focuses on the axiomatic and complexity aspects of such languages and semantics and on their decision procedures (e.g., for satisfiability checking, model checking and planning), in order to automate the reasoning and decision-making of artificial agents designed to interact and communicate with other (artificial or human) agents. His work has a distinctly interdisciplinary perspective, interacting closely, at both the conceptual and formal levels, with models of reasoning, decision and interaction developed in philosophy, law and economics. He has worked extensively on the use of formal methods for building ethical machines. The following are some of his publications on this topic.
Grandi, U., Lorini, E., Parker, T. (2023). Moral Planning Agents with LTL Values. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), ijcai.org, forthcoming.
Grandi, U., Lorini, E., Parker, T., Alami, R. (2022). Logic-Based Ethical Planning. In Proceedings of the 21st International Conference of the Italian Association for Artificial Intelligence (AIxIA 2022), LNCS, volume 13796, Springer-Verlag, pp. 198-211.
Parker, T., Grandi, U., Lorini, E., Clodic, A., Alami, R. (2022). Ethical Planning with Multiple Temporal Values. In Proceedings of the Robophilosophy Conference 2022: Social Robots in Social Institutions, Frontiers in Artificial Intelligence and Applications series, IOS Press, pp. 435-444.
Lorini, E. (2021). A Logic of Evaluation. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2021), ACM, pp. 827-835.
Lorini, E., Mühlenbernd, R. (2018). The long-term benefits of following fairness norms under dynamics of learning and evolution. Fundamenta Informaticae, 158(1-3), pp. 121-148.
Lorini, E. (2016). A logic for reasoning about moral agents. Logique & Analyse, 58(230), pp. 177-218.
Lorini, E., Mühlenbernd, R. (2015). The long-term benefits of following fairness norms: a game-theoretic analysis. In Proceedings of the 18th Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2015), LNCS, Springer-Verlag, Berlin, pp. 301-318.
Lorini, E., Longin, D., Mayor, E. (2014). A logical analysis of responsibility attribution: emotions, individuals and collectives. Journal of Logic and Computation, 24(6), pp. 1313-1339.
Gaudou, B., Lorini, E., Mayor, E. (2013). Moral guilt: an agent-based model analysis. In Proceedings of the 9th Conference of the European Social Simulation Association (ESSA 2013), Advances in Social Simulation, Advances in Intelligent Systems and Computing series, Vol. 229, Springer, pp. 95-106.
Lorini, E. (2012). On the logical foundations of moral agency. In Proceedings of the 11th International Conference on Deontic Logic in Computer Science (DEON 2012), LNCS, volume 7393, Springer-Verlag, Berlin, pp. 108-122.
Lorini, E. (2011). From self-regarding to other-regarding agents in strategic games: a logical analysis. Journal of Applied Non-Classical Logics, 21(3-4), pp. 443-476.