For the complete list of my publications, see my Google Scholar profile.
Manganini, C. (2026). Bias and Miscomputation: A Philosophical and Formal Framework for Machine Learning Unfairness (PhD thesis). https://air.unimi.it/handle/2434/1232464.
Abstract. As Machine Learning (ML) systems are increasingly used in critical domains, the need for a coherent framework to account for the discriminatory effects of their outcomes becomes urgent. In computer science, existing approaches tend to emphasise the role of ML design, presenting algorithmic fairness as a matter of making better design choices, especially at the level of the data used to train the model. This dissertation challenges the adequacy of such a design-centric perspective on the problem of unfair ML predictions. It does so by reconnecting it with broader and longer-standing issues within the philosophy of computational artefacts that have largely been overlooked in the current debate in the ethics of artificial intelligence. The primary contribution of this thesis is to reframe the analysis of algorithmic discrimination around the notions of use, maintenance, and repair of ML systems. Specifically, I argue that the correctness criteria of an ML system should be reformulated in terms of the contextual convergence of the diverse normative requirements of the agents who use it. Compared to other accounts of ML normativity, this reconceptualisation avoids succumbing to scepticism about implementation ascriptions, while returning a more dynamic and realistic understanding of how normative requirements circulate, feed back, conflict, and adapt across complex ML systems. Crucially, I claim that this shift allows for a richer understanding of algorithmic fairness, viewing it as a plurality of data repair practices rather than a static value embodied by certain ML designs. Two main formal contributions follow from this analysis: the introduction of validity criteria for ML predictions, and the development of a novel logical framework for reasoning about how errors in the input data affect the fairness of algorithmic outcomes.
Heilmann, X., Manganini, C., Cerrato, M., Kestel, L., & Belle, V. (2026). A Neurosymbolic Approach to Counterfactual Fairness. Neurosymbolic Artificial Intelligence, 2, 29498732261443184. https://doi.org/10.1177/29498732261443184.
Abstract. Integrating fairness into machine learning models has been an important consideration for the last decade. Here, neurosymbolic models offer a valuable opportunity, as they allow the specification of symbolic, logical constraints that are often guaranteed to be satisfied. However, research on neurosymbolic applications to algorithmic fairness is still at an early stage. In this work, we bridge this gap by integrating counterfactual fairness into the neurosymbolic framework of logic tensor networks (LTN). We use LTN to express accuracy and counterfactual fairness constraints in first-order logic and employ them to achieve desirable levels of both performance and fairness at training time. Our approach is agnostic to the underlying causal model and data generation technique; for this reason, it may be easily integrated into existing pipelines that generate and extract counterfactual examples. We show, through concrete examples on three benchmark datasets, that logical reasoning about counterfactual fairness has some important advantages, among them its intrinsic interpretability and its flexibility in handling subgroup fairness. Compared to three recent methodologies in counterfactual fairness, our experiments show that a neurosymbolic, LTN-based approach attains better levels of counterfactual fairness.
Manganini, C., Corsi, E. A., & Primiero, G. (2026). Data Speak but Sometimes Lie: A Game-Theoretic Approach to Data Bias and Algorithmic Fairness. International Journal of Approximate Reasoning, 109608. https://doi.org/10.1016/j.ijar.2025.109608.
Manganini, C., & Primiero, G. (2025). Defining Formal Validity Criteria for Machine Learning Models. In J. M. Durán & G. Pozzi (Eds.), Philosophy of Science for Machine Learning. Synthese Library, vol. 527. Springer, Cham. https://doi.org/10.1007/978-3-032-03083-2_14.
Manganini, C. (2025). A Sceptical Paradox for Computational Artefacts. Philosophical Inquiries. Forthcoming. [Best paper in Metaphysics at SIFA 2025]
Buda, A. G., Manganini, C., & Primiero, G. (2025). A Philosophical Framework for Data-Driven Miscomputations. Philosophies, 10(4), 88. https://doi.org/10.3390/philosophies10040088.
Manganini, C., & Primiero, G. (2024). Reasoning With and About Bias. In H. Hosni & J. Landes (Eds.), Perspectives on Logics for Data-driven Reasoning. Logic, Argumentation & Reasoning, vol. 35. Springer, Cham. https://doi.org/10.1007/978-3-031-77892-6_7.
Algorithmic Fairness as a Repair Practice