Hello,

I am a tenure-track researcher at the Swiss AI lab IDSIA USI-SUPSI in Lugano (CH), where I am a member of the Responsible AI and Society research area.

I am a member of the Interpretable Deep Learning community, actively promoting research in concept-based interpretability around the globe.

My current research includes:

In the past, I was also active in the fields of:


Most Relevant Recent Publications:

Concept-Based Interpretability:

De Felice, G., Casanova Flores, A., De Santis, F., Schneider, J., Santini, S., Barbiero, P., & Termine, A. (2025). Causally Reliable Concept Bottleneck Models. Accepted at NeurIPS 2025.

Philosophy of Science:

Termine, A., Ratti, E., & Facchini, A. (2026). Machine learning and theory-ladenness: a phenomenological account. Synthese, 207(3), 94.

Responsible AI in Biomedicine and Education:

Negrini, L., Lamacchia, M., Carruba, M. C., Delucchi, E., Babazadeh, M., Mangili, F., & Termine, A. (2025). Exploring the Role of Professional Development in Fostering AI Competence Among Teachers in Southern Switzerland. In Workshop on Artificial Intelligence with and for Learning Sciences: Past, Present, and Future Horizons (pp. 110-120). Cham: Springer Nature Switzerland.