2023 YOUNG RESEARCHERS Postdoctoral Fellowship funded by FONDAZIONE CARIPLO
Title of the project: Logics for Scientific Inferences (LOGSI)
Code: 2023-0978
Duration of the project: 01.10.2024 - 30.09.2027.
The aim of LOGSI is to formalise central methods of scientific inference in a logical setting. In the life and social sciences, scientific evidence is interpreted and understood in different ways, and the conclusions drawn from the same experiment can be contradictory. Such disagreement is detrimental to science and, ultimately, to public trust in science itself. LOGSI will introduce criteria of logical validity for scientific inference to clarify which inferences we should consider correct. In doing so, this research project will make a central contribution to the methodology of data-driven science (and thus AI) and will help to consolidate public trust in the scientific method.
To properly address the central question of the project, I articulate the research endeavour around five questions.
(Q1) What is the logic of Strong Inference?
Strong Inference is a scientific method widely used in the life sciences. The central idea behind Strong Inference is that scientists should devise two or more falsifiable hypotheses that might explain a phenomenon [10]. They then conduct experiments to reject one or more of these hypotheses and 'recycle' the process until only one hypothesis is retained. To answer Q1, I will use techniques from game theory [9] to capture the central mechanism of generating and subsequently eliminating candidate hypotheses. Following an approach similar to [3], I will introduce a graded consequence relation and a system of logical rules, and I will prove a completeness theorem.
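To fix intuitions, here is a minimal sketch of the Strong Inference loop in Python. The hypotheses, experiments, and the run_experiment callable are hypothetical placeholders for illustration; they are not part of the project's intended game-theoretic formalism.

    # Strong Inference as iterative elimination: candidate hypotheses are
    # repeatedly tested and the falsified ones discarded, until at most one
    # survives. All names below are illustrative placeholders.
    def strong_inference(hypotheses, experiments, run_experiment):
        """Return the hypotheses retained after the decisive experiments.

        hypotheses: set of mutually exclusive candidate hypotheses
        experiments: iterable of experiments, each designed to refute some candidates
        run_experiment: callable mapping an experiment to the set of
            hypotheses refuted by its observed outcome
        """
        candidates = set(hypotheses)
        for experiment in experiments:
            if len(candidates) <= 1:
                break  # only one hypothesis retained: stop recycling
            candidates -= run_experiment(experiment)  # discard falsified hypotheses
        return candidates

    # Toy run: three rival hypotheses, two decisive experiments.
    surviving = strong_inference(
        hypotheses={"H1", "H2", "H3"},
        experiments=["E1", "E2"],
        run_experiment=lambda e: {"E1": {"H1"}, "E2": {"H3"}}[e],
    )
    print(surviving)  # {'H2'}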
(Q2) What is the logic of the Null Hypothesis Significance Testing (NHST)?
NHST is a statistical method used to provide evidence for an effect, and it is extensively used in psychology, biology and the social sciences. NHST can be thought of as a special case of Strong Inference: it involves formulating only one hypothesis, referred to as the null hypothesis (H0), and asking whether, or to what extent, the data are consistent with H0. Assuming that H0 is true, one computes the probability of observing data at least as extreme as those actually observed. If this probability, called the p-value, is small enough, we conclude that H0 is probably false. To answer Q2, I will introduce a graded consequence relation whose intended semantics is NHST. Then, I will investigate the system of logical rules with respect to which this consequence relation is complete.
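As a concrete illustration, here is a worked p-value computation under simple, assumed conditions: H0 states that a coin is fair, and we observe 16 heads in 20 tosses. The one-sided p-value is the probability, under H0, of observing 16 or more heads; the numbers are a textbook-style toy, not data from the project.

    from math import comb

    n, k, p0 = 20, 16, 0.5  # tosses, observed heads, success probability under H0

    # p-value: probability under H0 of data at least as extreme as observed,
    # i.e. P(X >= k) for X ~ Binomial(n, p0)
    p_value = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

    print(f"p-value = {p_value:.4f}")  # ~0.0059: below the conventional 0.05
                                       # threshold, so the data are judged
                                       # inconsistent with H0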
(Q3) How can we quantify inconsistency?
Quantifying inconsistency involves measuring the degree of disagreement or contradiction between different pieces of information, data or hypotheses. The p-value can be understood as measuring the inconsistency between the data and H0: the smaller the p-value, the higher the degree of inconsistency. Thus, the degree of the consequence relation introduced in Q2 can be instantiated with the degree of inconsistency between the data and H0. To do this, I will use scoring rules [6] and the geometric approach of [5].
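For illustration only, one naive way to instantiate such a degree is to map the p-value to an inconsistency score; the mapping 1 - p below is a toy choice and not the measure the project will derive from scoring rules [6] and the geometric approach of [5].

    def inconsistency_degree(p_value: float) -> float:
        """Map a p-value in [0, 1] to a toy inconsistency degree in [0, 1]."""
        if not 0.0 <= p_value <= 1.0:
            raise ValueError("p-value must lie in [0, 1]")
        return 1.0 - p_value  # smaller p-value, higher inconsistency

    print(inconsistency_degree(0.0059))  # ~0.994: data strongly at odds with H0
    print(inconsistency_degree(0.8))     # 0.2: data largely consistent with H0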
(Q4) Can we formalise scientific inference in Argumentation Theory?
Argumentation frames can be used to define non-monotonic consequence relations, i.e. consequence relations that allow for the retraction of conclusions in light of new evidence (premises). When the arguments themselves are given a logical structure, the resulting frameworks are called logical argumentation frames. Taking [4] as a starting point, I will introduce and investigate the properties of a class of logical argumentation frames suitable for formalising scientific inference.
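For readers unfamiliar with argumentation frames, here is a minimal sketch of the abstract (Dung-style) setting on which logical argumentation frames are built: a set of arguments, an attack relation, and the grounded extension obtained by iterating the characteristic function. The three arguments are hypothetical placeholders, and the code illustrates standard semantics rather than the specific frames the project will develop.

    def grounded_extension(arguments, attacks):
        """Compute the grounded extension as the least fixed point of the
        characteristic function F(S) = {a : every attacker of a is attacked by S}."""
        def defended(arg, current):
            # arg is acceptable w.r.t. current if each of its attackers
            # is counter-attacked by some member of current
            return all(
                any((d, attacker) in attacks for d in current)
                for (attacker, target) in attacks
                if target == arg
            )

        extension = set()
        while True:
            new = {a for a in arguments if defended(a, extension)}
            if new == extension:
                return extension
            extension = new

    # Toy frame: a attacks b, b attacks c. Argument a is unattacked, so it is
    # accepted; b is defeated; c is reinstated because a defends it.
    print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
    # ['a', 'c']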
(Q5) Can we produce formal explanations for scientific inference through argumentation?
Explainable Artificial Intelligence (XAI) is an emerging research area that aims to develop AI systems that can provide good explanations for their decisions, thereby allowing users to understand and trust the systems they use. Using the argumentation frames introduced in Q4 and the recent approach of [1,2], I will produce formal explanations of what has been inferred. LOGSI thus aims to use notions from logical argumentation theory to develop an XAI system for scientific inference.
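As a toy illustration of the idea, loosely in the spirit of [1] and reusing the frame from the Q4 sketch: an accepted argument can be 'explained' by reporting, for each of its attackers, the accepted arguments that defeat them. This rendering of an explanation is an assumption made for illustration, not the actual definitions of [1,2].

    def explain(arg, extension, attacks):
        """For each attacker of arg, list the accepted counter-attackers."""
        attackers = {x for (x, y) in attacks if y == arg}
        return {
            attacker: {d for d in extension if (d, attacker) in attacks}
            for attacker in attackers
        }

    attacks = {("a", "b"), ("b", "c")}
    extension = {"a", "c"}  # grounded extension of the Q4 toy frame
    print(explain("c", extension, attacks))
    # {'b': {'a'}}: c is accepted because its only attacker, b, is defeated by a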
LOGSI will advance the state of the art in the methodology of reasoning by formalising two prominent kinds of scientific inference and providing a new XAI system for these inferences based on logical argumentation theory.
Bibliographical References
[1] Arieli, O., Borg, A., Hesse, M., Straßer, C., Explainable Logic-Based Argumentation. In F. Toni, S. Polberg, R. Booth, M. Caminada, & H. Kido (Eds.), Computational Models of Argument. Frontiers in Artificial Intelligence and Applications, 353:32-43, 2022.
[2] Arieli, O., Straßer, C., Deductive argumentation by enhanced sequent calculi and dynamic derivations. Electronic Notes in Theoretical Computer Science, 323:21-37, 2016.
[3] Baldi, P., Corsi, E.A., Hosni, H., Logical Desiderata on Statistical Inference. Forthcoming in Walter Carnielli on Reasoning, Paraconsistency, and Probability, Springer’s series “Outstanding Contributions to Logic”, 1-24, 2023.
[4] Corsi, E.A., Fermüller, C. G., Logical Argumentation Principles, Sequents, and Nondeterministic Matrices. International Workshop on Logic, Rationality and Interaction (LORI), 422-437, 2017.
[5] Corsi, E.A., Flaminio, T., Hosni, H., When Belief Functions and Lower Probabilities are Indistinguishable. Proceedings of Machine Learning Research (ISIPTA), 147:83-89, 2021.
[6] De Finetti, B., Theory of probability, vol. 1, John Wiley & Sons, 1974.
[7] Loh, H.W., Ooi, C.P., Seoni, S., Barua, P.D., Molinari, F., Acharya, U.R., Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011–2022). Computer Methods and Programs in Biomedicine, 226:107161, 2022.
[8] Longo, L., Argumentation for knowledge representation, conflict resolution, defeasible inference and its integration with machine learning. Machine Learning for Health Informatics: State-of-the-Art and Future Challenges, 183-208, 2016.
[9] Mundici, D., The logic of Ulam’s game with lies. Knowledge, belief and strategic interaction, 275-284, 1992.
[10] Platt, J.R., Strong Inference. Science, 146(3642):347-353, 1964.
OUTREACH:
MeetmeTonight 2024: presentation of the project LOGSI at the EU Corner. (slides)
SEMINARS AND CONFERENCES:
October 2024: "What follows from the available evidence when its uncertainty can be manipulated?" (joint work with Paolo Baldi, Hykel Hosni and Juergen Landes), LUCI internal seminars. (slides)
February 2025: "What Game Are We Playing? A Game-Theoretic Approach to Data Bias and Machine Learning Unfairness" (joint work with Chiara Manganini and Giuseppe Primiero), CSL 2025 Workshop on Learning and Logic (LeaLog@CSL). (slides)