Here is a list of research projects in which I am or have been involved.
Ongoing Projects
European University Alliance Program
Supported by EU Commission and the Swiss Agency MOVETIA
Head of the Swiss Unit: Dr. Alessandro Facchini, SUPSI-DTI, Senior Lecturer-Researcher, Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI).
Role: Vice-Head of the Swiss unit and Researcher
Started: 01/01/2025
EUonAIr represents a pivotal initiative in the landscape of European university alliances. Comprising ten leading universities specializing in economics, business, management, and technology, EUonAIr is dedicated to pioneering responsible AI integration in curricula, developing Smart University concepts, and enhancing AI-assisted mobility experiences. This alliance is part of the European Commission's broader effort, particularly through the Erasmus+ European Universities initiative, which aims to consolidate Europe's position as a leader in higher education.
Supported by Hasler Stiftung
Hosted by Software Institute, Università della Svizzera italiana (USI), Lugano, CH.
PI: Dr. Pietro Barbiero, Researcher, Computer Systems Institute, USI, Lugano
Role: Co-PI
Started: 01/12/2024
Contemporary deep learning (DL) models excel at approximating complex functions by discovering fine-grained correlations among features. Yet DL models struggle to generalize in out-of-distribution settings, as they fall short in identifying causal relationships. This small project explores the possibility of constructing a unified DL architecture supporting causal inference queries and thus producing more robust, explainable, and generalizable predictions. This architecture will combine Retrieval-Augmented Generation (RAG) techniques, to learn a causal graph from the analysis of relevant scientific literature, with Deep Concept Reasoning models, to infer structural equations making the underlying causal mechanisms explicit. The project will perform a preliminary exploration of this approach, with the aim of both testing its feasibility and drafting an SNSF grant proposal to further develop and scale it to real-world applications.
Supported by Horizon Europe grant no. 101147693
Hosted by IDSIA USI-SUPSI
Local Supervisor: Prof. Francesco Flammini
Role: Team-member Researcher
Started: 01/06/2024
AutoMoTIF is a European project focusing on the development of strategies, governance models, and synergies enabling seamless integration and interoperability of automated transport systems in Europe. SUPSI researchers participate in the project through the Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI) by leading the AI team working on the analysis and optimization of inter-modal logistic terminals, in collaboration with the DSP company in Manno.
Supported and hosted by the Institut für Geschichte und Ethik der Medizin (Institute for the History and Ethics of Medicine), Technical University of Munich (TUM)
PI: Prof. Marcello Ienca - Assistant Professor in Ethics of Artificial Intelligence and Neuroscience, Faculty of Medicine, TUM.
Role: Adjunct Research Fellow
Started: 01/03/2024
Supported by Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland (SUPSI).
Hosted by IDSIA USI-SUPSI
PI: Alessandro Facchini - Senior Lecturer-Researcher IDSIA USI-SUPSI
Role: Team-member Researcher
Started: 01/01/2023, joined: 01/01/2024
Related results:
*equal contribution.
Completed Projects
Supported by the ERA-NET NEURON grant JTC2020-ELSA:HYBRIDMIND
Hosted by the Intelligent Systems Ethics Group, College of Humanities, EPFL, Lausanne
Local Leader: Prof. Marcello Ienca - Assistant Professor in Ethics of Artificial Intelligence, TUM-EPFL
Role: Postdoctoral Research Fellow
Started: 01/01/2021; Joined: 01/03/202; Completed: 31/12/2024
Intelligent neuroprostheses represent the next phase in the evolution of devices integrated with the brain to assist or alter human sensory, motor, cognitive, and affective capacities. These devices include "read-out" systems which detect, interpret, and translate neural signals for applications such as allowing a paralyzed person to move a robotic arm or cursor. They also include “write-in” systems which deliver signals or stimulation to the brain to affect thinking, emotions, and the ability to move. What makes a neuroprosthesis intelligent is that it incorporates artificial intelligence (AI) technologies to better adapt to the brain's activity. This is characterized by mutual adaptation where both the "user" and the device continuously change in response to each other over time.
The rate of development of AI-based neurotechnologies is far outpacing our understanding of their ethical consequences, and it is straining the legal regimes tasked with regulating such technologies. The incorporation of AI elements like deep learning can make these systems hard to predict and control, and even hard to understand at all, raising unprecedented transparency and accountability issues. Added to this are the psychological implications of write-in devices, which can directly influence the cognition (mind?) of the user. The Hybrid Minds project aims to lay the foundation for a unified theoretical approach to the ethical-legal assessment of intelligent neuroprostheses, informed by the experiences and perspectives of users as well as by dialogue with the neuroengineering community and other key stakeholders.
Related results:
Supported by the Hasler Stiftung grant no. 22050
Hosted by IDSIA USI-SUPSI
PI: Alessandro Antonucci, Senior Lecturer Researcher, IDSIA USI-SUPSI.
Role: Team-member Assistant Researcher
Started: 01/02/2023; Completed 31/01/2024
Explainable Artificial Intelligence (XAI) comprises a variety of techniques for explaining how and why black-box machine learning (ML) models generate certain predictions. Counterfactual explanation (CE) methods are one such technique: they generate counterfactual instances telling users how to change the model's input to obtain a certain desired outcome. Despite their popularity, CEs suffer from a major limitation: the explanations they produce are not genuinely causal. The goal of MaLESCaMo is to overcome this limitation by developing a new XAI methodology based on the use of surrogate causal models.
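To make the idea of a counterfactual explanation concrete, here is a minimal sketch of a brute-force CE search against a toy black-box classifier. Everything here is illustrative: the model, the feature names, and the search procedure are assumptions for exposition, not the MaLESCaMo method.

```python
import itertools

def predict(x):
    # Toy "black box": approve a loan iff income + 2*savings > 10.
    income, savings = x
    return int(income + 2 * savings > 10)

def counterfactual(x, desired, step=1, max_delta=5):
    # Search small integer perturbations of x until the model outputs
    # `desired`, keeping the candidate with the smallest total change
    # (an L1-style proximity criterion, as in typical CE methods).
    deltas = range(-max_delta, max_delta + 1)
    best = None
    for d in itertools.product(deltas, repeat=len(x)):
        cand = tuple(xi + di * step for xi, di in zip(x, d))
        if predict(cand) == desired:
            cost = sum(abs(di) for di in d)
            if best is None or cost < best[0]:
                best = (cost, cand)
    return best[1] if best else None

x = (4, 2)                        # income=4, savings=2 -> rejected
cf = counterfactual(x, desired=1) # e.g. raise savings to 4 -> approved
```

The returned instance "explains" the rejection by showing a nearby input that would have been approved; the project's point is precisely that such nearness-based explanations need not reflect the true causal mechanism.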
Related results:
PhD project funded by the University of Milan, School of Humanities
Hosted by the Logic Uncertainty Computation and Information (LUCI) lab, Department of Philosophy, University of Milan.
Supervisor: Prof. Giuseppe Primiero, Associate Professor of Logic, UniMI.
Started: 01/10/2019 - Completed: 31/12/2023
Contemporary society is increasingly dependent on the use of autonomous systems. Ensuring that these systems function correctly and do not engage in undesirable behaviour is therefore of vital importance. Probabilistic model checking is one of the best-known ways of verifying that an autonomous system functions correctly. Although this area of research has undergone considerable advances in recent years, there are several developments and applications that have not yet been explored. This project focuses on investigating three of these developments and applications that are of particular relevance to the field of artificial intelligence.
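A basic operation behind probabilistic model checkers such as PRISM or Storm is computing the probability of eventually reaching a target state in a Markov model. The sketch below shows this via value iteration on a small, made-up discrete-time Markov chain; the chain and the function names are assumptions for illustration, not part of the thesis.

```python
# States: 0 (start), 1 (intermediate), 2 (goal), 3 (failure).
# P[s] maps successor state -> transition probability.
P = {
    0: {1: 0.5, 3: 0.5},
    1: {2: 0.8, 0: 0.2},
    2: {2: 1.0},   # absorbing goal
    3: {3: 1.0},   # absorbing failure
}

def reach_prob(P, goal, iters=1000):
    # x[s] approximates Pr(eventually reach `goal` from s): the fixed
    # point of x = P*x with x[goal] pinned to 1 (the PCTL query
    # "P=? [ F goal ]" in PRISM-style syntax).
    x = {s: 1.0 if s == goal else 0.0 for s in P}
    for _ in range(iters):
        x = {s: 1.0 if s == goal else
                sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

probs = reach_prob(P, goal=2)  # probs[0] converges to 4/9
```

For this chain the exact values follow from the linear system x0 = 0.5*x1, x1 = 0.8 + 0.2*x0, giving x0 = 4/9 and x1 = 8/9; value iteration converges to them.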
Related results:
Termine, A. (2023) Probabilistic Model Checking with Markov Models Semantics. PhD Thesis, UniMI.