For a list of my publications, please visit my Google Scholar profile.

Published

Abstract. In recent years, the problem of evaluating the trustworthiness of machine learning systems has become more urgent than ever. A directly related issue is that of assessing the fairness of their decisions. In this work, we adopt a primarily logical perspective on the topic, by trying to highlight the basic logical characteristics of the inferential setting in which a biased prediction occurs. To do so, we first identify and formalise four key desiderata for a logic capable of modelling the behaviour of a biased system, namely: skewness, dependency on data and model, non-monotonicity, and the existence of a minimal distinction between types of bias. On this basis, we define two metrics, one for group and one for individual fairness. 
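The abstract does not state the two metrics explicitly; purely as a point of reference, standard formulations of a group-level and an individual-level fairness notion look as follows (these are common textbook notions, not necessarily the metrics defined in the paper):

```latex
% Illustrative only: standard fairness notions, not the paper's own metrics.
% Group fairness as statistical parity of a classifier h w.r.t. a protected attribute A:
\Delta_{\mathrm{SP}}(h) \;=\; \bigl|\,\Pr[h(X)=1 \mid A=0] \;-\; \Pr[h(X)=1 \mid A=1]\,\bigr|

% Individual fairness as a Lipschitz condition: similar individuals (w.r.t. a metric d
% on inputs) must receive similar predictions (w.r.t. a metric D on outputs):
D\bigl(h(x), h(x')\bigr) \;\le\; L\, d(x, x') \quad \text{for all individuals } x, x'
```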

Accepted

Abstract. Until now, research on neurosymbolic applications for fairness has focused on group fairness, neglecting individual-based notions such as counterfactual fairness. In our work, we bridge this gap by exploring an approach for integrating counterfactual fairness into the Logic Tensor Networks (LTN) framework. Concretely, we iteratively impose counterfactual fairness constraints on our model in order to achieve desirable levels of performance and fairness. Our approach consists of a continual-learning pipeline with an (optional) human-in-the-loop. Furthermore, it is interpretable, since it uses a symbolic expression to capture the loss function for the fairness criteria. We test our proposal on a real-world dataset to show its general feasibility.
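As a rough, hypothetical sketch of the kind of pipeline described above: the paper's approach is based on Logic Tensor Networks, whereas the code below replaces the symbolic LTN constraint with a plain differentiable surrogate penalty in PyTorch, purely to illustrate the idea of iteratively tightening a counterfactual-fairness constraint with an optional human check. All names, sizes, and thresholds are made up for illustration.

```python
# Minimal, hypothetical sketch (not the paper's code): a training loop that
# iteratively tightens a counterfactual-fairness constraint. The surrogate
# penalty asks the prediction not to change when the protected attribute is
# counterfactually flipped.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: the last feature plays the role of a binary protected attribute.
n, d = 256, 6
X = torch.randn(n, d)
X[:, -1] = (X[:, -1] > 0).float()
y = (X[:, 0] + 0.5 * torch.randn(n) > 0).float()

model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

lam = 0.0                              # weight of the fairness penalty
for _ in range(5):                     # outer loop: iteratively impose the constraint
    for _ in range(200):               # inner loop: ordinary gradient training
        opt.zero_grad()
        logits = model(X).squeeze(1)
        task_loss = bce(logits, y)

        # Counterfactual copy of the batch: flip only the protected attribute.
        X_cf = X.clone()
        X_cf[:, -1] = 1.0 - X_cf[:, -1]
        cf_gap = (torch.sigmoid(logits)
                  - torch.sigmoid(model(X_cf).squeeze(1))).abs().mean()

        (task_loss + lam * cf_gap).backward()
        opt.step()

    # Checkpoint for an (optional) human in the loop: if the counterfactual
    # gap is still too large, strengthen the constraint and train again.
    if cf_gap.item() > 0.02:
        lam += 1.0
```

The outer loop corresponds to the iterative imposition of constraints: each round, the remaining counterfactual gap is inspected (automatically or by a human) and the constraint weight is raised only if the desired fairness level has not yet been reached.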

Abstract. The widespread emergence of phenomena of bias is certainly among the most adverse impacts of new data-intensive sciences and technologies. The causes of such undesirable behaviours must be traced back to the data themselves, as well as to certain design choices of machine learning algorithms. The task of modelling bias from a logical point of view requires extending the vast family of defeasible logics and logics for uncertain reasoning with ones that capture a few fundamental properties of biased predictions. However, a logically grounded approach to machine learning fairness is still at an early stage in the literature. In this paper, we discuss current approaches to the topic, formulate general logical desiderata for logics to reason with and about bias, and provide a novel approach.
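To illustrate the non-monotonicity requirement that a logic of biased predictions would have to satisfy (a generic textbook formulation, not the formalism proposed in the paper):

```latex
% Illustrative only. Classical consequence is monotonic: adding premises never
% invalidates a conclusion.
\Gamma \vdash \varphi \;\Longrightarrow\; \Gamma \cup \{\psi\} \vdash \varphi

% A logic for (possibly biased) predictions must instead allow retraction: a
% prediction supported by data D may no longer be supported once further data D'
% (e.g. a de-biasing sample) is taken into account.
D \mathrel{|\!\sim} \hat{y}(x) \qquad \text{while} \qquad D \cup D' \;\not\!\mathrel{|\!\sim}\; \hat{y}(x)
```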

Abstract. Validity criteria for traditional deterministic computational systems have been spelled out in terms of accuracy, precision, calibration, verification, and validation. In the context of scientific simulations, related formal validity requirements have been defined for the relation between the mathematical model underlying the target system and the computational model used to simulate it. With machine learning models entering the picture, these considerations need to be reviewed, since the conditions under which this relation can be analyzed have largely changed. This is due to a number of reasons: the target system is no longer directly available as an object of investigation; the mathematical model is still abstracted from the behavior of the target system, but through the mediation of the computational model; and the computational model itself remains largely opaque. Thus, while a prediction may turn out to be loosely isomorphic to a given trained machine learning model, the latter's ability to correctly represent reality, and thus to be considered epistemologically valid, is still debated. We argue that the underlying relations establishing validity criteria for machine learning models need to be reconsidered in terms of formal relations of probabilistic simulation, and that validation and verification processes for relevant stochastic properties are necessary to this aim.
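For concreteness, one standard way of making "formal relations of probabilistic simulation" precise uses weight-function liftings of a relation to probability distributions; the formulation below is a common one from the literature on probabilistic transition systems and is given here only as background, not as the exact relation adopted in the paper:

```latex
% Illustrative background only. A relation R between state spaces S and T is a
% probabilistic simulation if s R t and s --a--> mu imply some t --a--> nu with
% mu \sqsubseteq_R nu, where mu \sqsubseteq_R nu holds iff there is a weight
% function w : S x T -> [0,1] such that
w(u,v) > 0 \;\Rightarrow\; u \mathrel{R} v, \qquad
\sum_{v \in T} w(u,v) = \mu(u), \qquad
\sum_{u \in S} w(u,v) = \nu(v).
```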

In Preparation

Review Work