Projects

Here is a list of research projects in which I am or have been involved.

Ongoing projects

AuToMoTIF

Supported by Horizon Europe grant no. 101147693

Hosted by IDSIA USI-SUPSI

Local Supervisor: Prof. Francesco Flammini

Role: Team-member Researcher

Prospective starting date: June 2024

Ethics of Predictive AI in Psychiatry

Supported and hosted by the Institut für Geschichte und Ethik der Medizin, Technical University of Munich

Supervisor: Prof. Marcello Ienca - Assistant Professor in Ethics of Artificial Intelligence, TUM.

Role: Adjunct Research Fellow 

Started: 01/03/2024

Hybrid Minds

Supported by the ERA-NET NEURON grant JTC2020-ELSA:HYBRIDMIND

Hosted by the Intelligent Systems Ethics Group, College of Humanities, EPFL, Lausanne

Local Leader: Prof. Marcello Ienca - Assistant Professor in Ethics of Artificial Intelligence, TUM-EPFL

Role: Postdoctoral Research Fellow

Started: 01/01/2021, joined: 01/03/2024

Web-page  

Intelligent neuroprostheses represent the next phase in the evolution of devices integrated with the brain to assist or alter human sensory, motor, cognitive, and affective capacities. These devices include "read-out" systems which detect, interpret, and translate neural signals for applications such as allowing a paralyzed person to move a robotic arm or cursor. They also include “write-in” systems which deliver signals or stimulation to the brain to affect thinking, emotions, and the ability to move. What makes a neuroprosthesis intelligent is that it incorporates artificial intelligence (AI) technologies to better adapt to the brain's activity. This is characterized by mutual adaptation where both the "user" and the device continuously change in response to each other over time.

The rate of development of AI-based neurotechnologies is far outpacing our understanding of their ethical consequences, and it is straining the legal regimes tasked with regulating such technologies. The incorporation of AI elements such as deep learning can make these systems hard to predict and control, and even hard to understand at all, raising unprecedented transparency and accountability issues. Added to this are the psychological implications of write-in devices, which can directly influence the cognition (the mind?) of the user. The Hybrid Minds project aims to lay the foundation for a unified theoretical approach to the ethical-legal assessment of intelligent neuroprostheses. This approach is informed by the experiences and perspectives of users, as well as by dialogue with the neuroengineering community and other key stakeholders.

B4EAI - Best for Ethical AI

Supported by the Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland (SUPSI).

Hosted by IDSIA USI-SUPSI

PI: Alessandro Facchini - Senior Lecturer-Researcher IDSIA USI-SUPSI

Role: Team-member Researcher 

Started: 01/01/2023, joined: 01/01/2024 

Completed Projects

MaLESCaMo

Supported by the Hasler Stiftung grant no. 22050

Hosted by IDSIA USI-SUPSI

PI: Alessandro Antonucci, Senior Lecturer Researcher, IDSIA USI-SUPSI.

Role: Team-member Assistant Researcher 

Started: 01/02/2023 - Completed: 31/01/2024

Explainable Artificial Intelligence (XAI) comprises a variety of techniques for explaining how and why black-box machine learning (ML) models generate certain predictions. Counterfactual explanation (CE) methods are one such technique: they generate counterfactual instances that tell users how to change the model's input to obtain a certain desired outcome. Despite their popularity, CEs suffer from a major limitation: the explanations they produce are not genuinely causal. The goal of MaLESCaMo is to overcome this limitation by developing a new XAI methodology based on the use of surrogate causal models.
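
As a toy illustration of what a counterfactual explanation looks like (a minimal sketch only: the model, the features, and the greedy search below are invented for illustration and are not the MaLESCaMo methodology):

```python
def model(x):
    """Toy black-box classifier: approves (1) if income - debt > 10."""
    return 1 if x["income"] - x["debt"] > 10 else 0

def counterfactual(x, desired=1, step=1.0, max_iters=100):
    """Greedily perturb a single feature until the model's output flips."""
    cf = dict(x)
    for _ in range(max_iters):
        if model(cf) == desired:
            return cf  # an input the model classifies as desired
        cf["income"] += step  # suggest raising income a little more
    return None  # no counterfactual found within the budget

x = {"income": 5.0, "debt": 3.0}       # rejected by the model
cf = counterfactual(x)                 # accepted by the model
# The difference between x and cf is the explanation: "raise income".
```

Note that the explanation is purely associational: it says what input change flips the prediction, not what intervention in the world would causally produce the desired outcome, which is exactly the gap the project targets.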

Related results:

Termine, A., Antonucci, A., Facchini, A. (2023) Machine Learning Explanations by Surrogate Causal Models (MaLESCaMo). In 1st XAI World Conference, XAI 2023, Lisbon, Portugal, July 26-28, 2023, Proceedings (late-breaking works and demo). Communications in Computer and Information Science, Cham: Springer International Publishing. forthcoming.


Probabilistic Model Checking with Markov Models Semantics: New developments and Applications

PhD project funded by the University of Milan, School of Humanities

Hosted by the Logic, Uncertainty, Computation and Information (LUCI) Lab, Department of Philosophy, University of Milan.

Supervisor: Prof. Giuseppe Primiero, Associate Professor of Logic, UniMI.

Started: 01/10/2019 - Completed: 31/12/2023

Contemporary society is increasingly dependent on the use of autonomous systems. Ensuring that these systems function correctly and do not engage in undesirable behaviour is therefore of vital importance. Probabilistic model checking is one of the best known ways of verifying that an autonomous system is functioning correctly. Although this area of research has undergone considerable advances in recent years, there are several developments and applications that have not yet been explored. This project focuses on investigating three of these developments and applications that are of particular relevance to the field of artificial intelligence.
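
For illustration, the core query of probabilistic model checking, "what is the probability of eventually reaching a target state?", can be answered for a toy discrete-time Markov chain by plain value iteration. This is a sketch under assumed data: the three-state chain and the solver below are invented examples, not models from the thesis.

```python
def reachability(P, target, iters=1000):
    """P maps each state to a list of (next_state, prob) pairs.
    Returns x with x[s] = Pr(eventually reach target, starting from s)."""
    states = list(P)
    x = {s: 1.0 if s == target else 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            if s != target:
                # probability of reaching the target in one more step
                x[s] = sum(p * x[t] for t, p in P[s])
    return x

# Toy chain: from s0, reach the goal with prob 0.5 or fall into an
# absorbing trap with prob 0.5.
chain = {
    "s0":   [("goal", 0.5), ("trap", 0.5)],
    "goal": [("goal", 1.0)],
    "trap": [("trap", 1.0)],
}
probs = reachability(chain, "goal")
# probs["s0"] == 0.5, probs["trap"] == 0.0
```

Dedicated probabilistic model checkers compute such reachability probabilities (and much richer temporal-logic properties) symbolically and at scale; this sketch only shows the underlying fixed-point idea.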

Related results:

Termine, A. (2023) Probabilistic Model Checking with Markov Models Semantics. PhD Thesis, UniMI.

Termine, A., Antonucci, A., Primiero, G., & Facchini, A. (2023). Imprecise Probabilistic Model Checking for Stochastic Multi-agent Systems. SN Computer Science, 4(5), 443.

Termine, A., Primiero, G., & D’Asaro, F. A. (2021). Modelling accuracy and trustworthiness of explaining agents. In Logic, Rationality, and Interaction: 8th International Workshop, LORI 2021, Xi'an, China, October 16-18, 2021, Proceedings 8 (pp. 232-245). Springer International Publishing.

Termine, A., Antonucci, A., Primiero, G., & Facchini, A. (2021). Logic and model checking by imprecise probabilistic interpreted systems. In Multi-Agent Systems: 18th European Conference, EUMAS 2021, Virtual Event, June 28–29, 2021, Revised Selected Papers 18 (pp. 211-227). Springer International Publishing.

Termine, A., Antonucci, A., Facchini, A., & Primiero, G. (2021, August). Robust model checking with imprecise Markov reward models. In International Symposium on Imprecise Probability: Theories and Applications (pp. 299-309). PMLR.