10h00 Registration
10h15 Opening
10h30 Joke Meheus - A formal logic for the two different meanings of “because” and its relation to understanding scientific explanation
11h45 Break
12h00 Stéphanie Ponsar - The Curry-Howard isomorphism and unificationist explanation
12h45 Lunch
14h00 Kit Fine - Truthmaker Semantics for Conditional Imperatives
15h15 Jon Litland - Exact Intuitionistic Modal Logic
16h00 Pause
16h30 Federico L. G. Faroldi - Justifications and Logics of Practical Reasons
17h15 Mario Günther and Holger Andreas - A Strengthened Ramsey Test for Causal Explanations
18h00 End of the first day
9h30 Hannes Leitgeb - HYPE and Possible States Semantics
10h45 Pause
11h15 Michele Lubrano - Difference-making and explanation in mathematics
12h00 Martin Pleitz - Paradox and the Structure of Explanation
12h45 Lunch
14h00 Michael Dunn - Aboutness and Truthmakers
15h15 Aleksandra Samonek - Deriving explanations from interpretable machine learning models and reliable task solutions in formal learning theory
16h00 Pause
16h30 Alessandro Torza - Ground and Modality
17h15 Peter Verdée - From operational semantics for relevant logic to an exact semantics for classical logic
18h00 End of the second day
9h30 Raymundo Morado - Explanation and non-classical consequence relations
10h45 Break
11h15 Lorenzo Rossi and Carlo Nicolai - Between semantic and metaphysical grounding
12h00 Ivo Pezlar - Understanding Incorrect Proofs
12h45 Lunch
14h00 Francesca Poggiolesi - Grounding axioms for (relevant) implication
15h15 David Gaytán - Are both sides of a Causal Asymmetry, Explanations after all? GMD on Asymmetries Interaction
16h00 Break
16h30 Friederike Moltmann - Object-Based Truthmaker Semantics
17h45 End of the third day
(in alphabetical order of the first author’s surname)
"Aboutness" and "truthmakers" have become popular topics in logic. My early work on relevance logic involved both of these, as I shall explain. The direct impetus for this talk is a chapter I wrote for a forthcoming book New Essays on the Belnap-Dunn Logic, eds. Heinrich Wansing and Hitoshi Omori, Synthese Library, Springer. The chapter is titled: "Two, Three, Four, Infinity: The Path to the Four-Valued Logic and Beyond," and traces the intellectual history starting from various representations of De Morgan Lattices given in the 1950’s. A representation of my own underlies my 1966 paper: “An Intuitive Semantics for First-Degree Entailments and Coupled Trees.” The idea is that an element of a De Morgan lattice can be interpreted as an ordered pair of sets of situations, the elements of the first being the situations that make it true, and of the second the situations that make it false. This is both a “truthmaker” and “falsitymaker” semantics, and involves the idea that in a given “situation” a sentence can be assigned any subset of the set of the usual two truth values {T, F}, and so (horrors!) a sentence can be both true and false (or neither). Nuel Belnap gave his famous application of this in his 1977 paper ‘How a Computer Should Think,” in which he imagines a database with inconsistent and incomplete information. Incidentally, Belnap also axiomatized first-degree entailments in his 1959 dissertation A Formalization of Entailment.
My 1966 dissertation The Algebra of Intensional Logics contained as its penultimate chapter "An Intuitive Semantics for First-Degree Entailments." However, this was a different “intuitive semantics,” and relied on an isomorphic variant of the representation I mentioned above. It interprets the elements of De Morgan lattices as "proposition surrogates," i.e., as ordered pairs of sets of topics. Rather than saying that a sentence was true about a topic, I instead said that the sentence gave definite positive information about the topic, and instead of saying that a sentence was false about one of these, I said the sentence gave definite negative information about the topic. But what is meant by "definite aboutness"? This is something I “could of/should of” addressed in my dissertation, and I will use my talk as an opportunity to explore this and some other “relevant topics.”
I present a hyperintensional framework for reasoning formally about practical reasons, based on justification logic, and discuss some applications. I argue that normative reasons are hyperintensional, and I describe a family of hyperintensional logics of reasons based on justification logics.
I shall outline a truthmaker semantics and logic for conditional imperatives and indicate how it might be subsumed under a more general theory of the conditional.
Causal asymmetries mainly point to the difficulty of integrating, into a formal model of scientific explanation, constraints linked to the interaction between the arguments that are candidates for explanations and the theoretical contexts they could be associated with. In this paper we propose, first, that there is at least one rational sense in which both sides of the asymmetry could be thought of as explanations. We then propose the construction of a model of scientific explanation that can take into consideration, as explanations, both sides of a causal asymmetry. For this purpose, we take as a basis the GMD formal framework proposed in [Gaytán & D'Ottaviano & Morado, 2018], and the clarification of the notion of scientific explanation constructed there. However, we add to the main GMD schema some stronger connection relationships within its proviso conditions, and we add an intensional interpretation of the causal connection assumed in the explanations. With these additions, we maintain a contextualist notion of explanation.
The present paper aims to complement causal model approaches to causal explanation by Woodward (2003), Halpern and Pearl (2005b), and Strevens (2008). It centers on a strengthened Ramsey Test for conditionals: α ≫ γ iff, after suspending judgment about α and γ, an agent can infer γ from the supposition of α (in the context of further background beliefs). Andreas and Günther (2018) used this conditional as the starting point of an analysis of ‘because’ in natural language. In what follows, we shall refine this analysis so as to yield a fully fledged account of (deterministic) causal explanation.
This talk will extend the system HYPE of hyperintensional logic and semantics to a possible-states-semantics for various kinds of modalities, including (one type of) grounding.
A truthmaker for a proposition P is exact if it contains nothing irrelevant to P. This paper asks two questions: if it is necessary that P, what are the truthmakers for the proposition that P is necessary? And if it is possible that P, what are the truthmakers for the proposition that P is possible? Building on Fine’s truthmaker semantics for intuitionistic logic, I develop a truthmaker semantics for a range of intuitionistic modal logics and establish soundness and completeness results. In the talk I will mainly focus on the philosophical ideas behind the semantics and on what those ideas may teach us about the metaphysics of modality.
Mathematicians do not only seek proofs that certain statements hold; they are also interested in explaining why they hold. It is no accident that explanatory proofs are sought even when perfectly acceptable proofs are already available. Explanatory proofs are preferred for a number of reasons, of which I pick three: their ability to show what the proved statements depend on, their suitability for generalization, and their observance of the right conceptual order. Still, there is no consensus among philosophers over what makes a proof explanatory.
I present an account of intra-mathematical explanation based on the notion of a difference-maker. A definition of 'crucial dependence' is given, and a proof of a statement S is deemed explanatory if S is deduced, in the most direct way, from the axiom on which S crucially depends. Such an axiom is what plays the role of difference-maker. This account of explanation in mathematics also illustrates how the three virtues of explanatory proofs listed above arise from the presence of a difference-maker in a proof.
Providing and understanding explanations is at the core of science communication. When visiting the science museum, five-year-olds are told that about 65 million years ago all the dinosaurs died because a big meteorite crashed into the earth, thereby changing the climatic conditions so dramatically that they could not survive. Later, the same children watch their physics teacher demonstratively drop a pen to the ground while stating that “objects fall to the ground because there is a force, called gravity, that attracts every object in the universe to every other object”.
In order for scientific communication to fulfill this purpose, we need to be able to establish coherence between different pieces of communicated information. This in turn requires what psycholinguists call “inferencing”: adding all kinds of information to what is explicitly communicated, for the sake of arriving at a coherent picture. Such inferences are driven by background knowledge and expectations created by the communication context, and fall into different categories. Some are related to referential coherence: establishing links between linguistic components and the discourse entities to which they refer (linking “they” to “dinosaurs”, for instance). Others have to do with elaborating on the information thus far (“if all objects fall to the ground because of gravity, this also holds for the pen the teacher dropped”). Still others are meant to establish (various kinds of) relations between the different segments in the communicated information, and are called “bridging inferences”. The relations involved in these bridging inferences can be explanatory, but also temporal, contrastive, argumentative, …
In some cases, bridging inferences are facilitated by explicit markers (“thereby”, “because”, “later”, “for the sake of”, “but”, “although”, “since”, …). In other cases, the relevant relations are left implicit and have to be discovered by the hearer. In all cases, however, hearers make (further) inferences in an attempt to fully understand the relations at hand. This evidently holds true for explanatory relations. When our five-year-olds hear the above explanation, they infer from it not only that “all dinosaurs died because the climate conditions changed”, but also that “a meteorite crashed into the earth”, that “this crash happened about 65 million years ago”, and that “the meteorite caused the climate conditions to change”. Some may also infer “if meteorites crash into the earth, then all dinosaurs are killed”. Others may infer “if meteorites crash into the earth, then all big animals are killed”, or even “if meteorites crash into the earth, then all animals are killed”.
The kinds of inferences people tend to make on the basis of explanatory relations have been extensively studied by psycholinguists. In line with the general aim of psycholinguistics, these studies are descriptive, not normative. For a normative perspective, it seems natural to turn to formal logic. After all, formal logic is, par excellence, the discipline concerned with the normative study of “inferencing”. There is a catch, however. Despite the fact that formal logicians are particularly fond of studying what one can and cannot derive from “small words”, and despite the fact that logic is supposed to be at the heart of explanation (at least since Hempel), formal logicians have up to now, and with very few exceptions, shown very little interest in explanatory relations. The main focus in formal logic is on (small words related to) elaborative inferences, to a lesser extent also on referential inferences, but not at all on bridging inferences, including bridging inferences related to explanation.
The aim of this paper is to approach bridging inferences related to so-called “backward causal connectives” (“because” in English; “want”, “omdat”, and “aangezien” in Dutch; “parce que”, “car”, and “puisque” in French; “weil”, “da” and “denn” in German) from a formal and normative point of view.
By means of examples, I shall show that, in natural languages, causal connectives do not always express causal relations, and may even express relations that are not at all explanatory. I shall also show that in some natural languages different causal connectives “specialize” in different relations. Whereas in Dutch “doordat” can only be used for cause-effect relations, an argument-conclusion relation can, in its backward direction, only be expressed by “want” and “aangezien”, and a reason-action relation in its backward direction only by “omdat”.
In other languages, such as English, this particular specialization is absent. This may explain why an asymmetry occurs in the ease of processing the different relations in English (see Traxler et al. (1997)), but not in Dutch (see Pit (2003)). While (2a) has been shown to take longer to process than (1a), no such difference is observed for their Dutch translations, (2b) and (1b):
(1a) There are holes in Ann’s clothes, because there are moths in her cupboard.
(1b) Er zijn gaten in Anns kleren want er zitten motten in haar kast.
(2a) There are moths in Ann’s cupboard, because there are holes in her clothes.
(2b) Er zitten motten in Anns kast want er zijn gaten in haar kleren.
The argument-conclusion relation as we find it in (2a)-(2b) is called diagnostic (as opposed to causal) in Traxler et al. (1997) and evidential or inferential in Schnieder (2011) (as opposed to explanatory).
I shall present a formal logic for the Dutch “want” in both its senses—the explanatory one from (1a)-(1b) will be explicated by the connective want1 and the diagnostic one from (2a)-(2b) by want2—and examine the relation of both connectives to explanation. The logic, which will be called WANT, will be formulated within the framework for abduction from Batens (2017).
After presenting WANT, I shall discuss its relation to inference to the best explanation, and show that the connective want1 from WANT enables one to “summarize” in a single statement the result of an abductive inference, provided the abductive conclusion is (at least defeasibly) accepted as true. As the implication in WANT is the material one, its application is restricted to noncausal statements. I shall, however, address the question of how WANT may be extended to handle causal abductive inferences as well. I shall also compare WANT to the logic BC from Schnieder (2011), which is claimed to be a logic for because in its explanatory sense. I shall argue that, contrary to what is claimed by its author, this logic does not provide an explication of the explanatory because from natural language discourse, but that the connective want2 from WANT does.
Batens, D. (2017). Abduction logics illustrating pitfalls of defeasible methods. In Urbaniak, R. and Payette, G., editors, Applications of formal philosophy: the road less travelled, volume 14 of Logic, argumentation & reasoning, pages 169–193. Springer, Berlin.
Pit, M. (2003). How to express yourself with a causal connective: Subjectivity and causal connectives in Dutch, German and French, volume 17. Rodopi.
Schnieder, B. (2011). A logic for ‘because’. The Review of Symbolic Logic, 4(3):445–465.
Traxler, M. J., Sanford, A. J., Aked, J. P., and Moxey, L. M. (1997). Processing causal and diagnostic statements in discourse. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1):88.
In this talk I give an outline of what I call 'object-based truthmaker semantics', a development of truthmaker semantics according to which the truthmaking relation applies not only to sentences, but also to modal and attitudinal objects, entities like obligations, permissions, claims, beliefs, and requests. Object-based truthmaker semantics accounts for various difficulties for the standard semantic analyses of modal sentences and attitude reports.
A traditional strategy to understand explanation has been to address the question of what makes a good explanation by pointing out what information might reduce the level of surprise a fact impresses upon a doxastic agent. This is usually cashed out in terms of what makes the fact an implicit consequence of the purported explanation according to a fixed logical system, usually a classical (Frege-Russell) one. We have come to realize the advantages of expanding our notion of logicality to encompass many forms of inference, both deductive and non-deductive, to construct more realistic models of explanation.
Thanks to this, we can now envision a reverse approach to explanation: instead of characterizing explanation in terms of inference, we can say that a good logical system is one that enables us to give a good explanation, that is, a system whose consequence relation enables an agent to reduce the level of surprise produced by a fact; this reduction is naturally cashed out in terms of making that fact a plausible consequence of an explanation employing that logic.
An explanation is better the more plausible the consequence relation is between the explanation and the fact to be explained. This notion of plausibility favors non-monotonic approaches and raises the meta-theoretic issue of which system to employ.
A full account of explanation will have to say something about the semantics for the consequence relation relative to which we propose something as an explanation. For instance, it is to be expected that a good notion of a good explanation will be sensitive to the formulation of each explanation. A logical system can be considered better, other things being equal, if it allows us to distinguish between different explanantia in terms of just their logical powers, that is, if it exhibits hyperintensionality.
We would also welcome some logical clues in the general direction of the pertinence and economy of explanations even if they need not exhaust the whole import of the relevance of explanans to explanandum, nor of the computational complexity of generating or justifying such explanations.
All this is framed within the general idea that the semantics for a logical system will have to do justice to our intuitions of the plausibility of the explanations that can be constructed in terms of each consequence relation. The more and better explanations a consequence relation allows us to generate, the more a corresponding logical system gains credibility as a generally “good” logical system.
The notion of proof has a strong explanatory ingredient, especially in the tradition of intuitionistic/constructive logic and type theory. Approaches in this tradition have, however, very little to say about how we understand incorrect proofs. Yet dealing with incorrect proofs, examining them and finding and explaining their errors (e.g., formal fallacies) is a natural aspect of proof construction and the corresponding general theory of proofs should be able to address it as well. In this talk, we introduce and demonstrate proof analysis within Transparent Intensional Logic and propose how to explain the semantics of incorrect proofs. The key idea will be to understand proofs as typed algorithmically structured objects that need not be effective, i.e., they can fail to deliver expected results.
Most of the logics of grounding that have so far been proposed contain grounding axioms, or grounding rules, for the connectives of conjunction, disjunction and negation, but little attention has been dedicated to the implication connective. The present talk aims to remedy this situation by proposing adequate grounding axioms for relevant implication. Because of the interaction between negation and implication, new grounding axioms concerning negation will also arise.
The Curry-Howard isomorphism contributed to the precise unification of the notions of proving and computing. In a 1969 paper, William Howard, building on earlier observations by Haskell Curry, showed a deep connection between the systems known as intuitionistic natural deduction and the simply typed λ-calculus. Roughly speaking, the Curry-Howard isomorphism states that a program does what its corresponding proof says; and conversely, a proof says what its corresponding program does.
After having introduced the simply typed λ-calculus and its key property that terms have unique types, we will state the connection between the λ-calculus and natural deduction and be able to enunciate the Curry-Howard isomorphism.
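Under the propositions-as-types reading, every well-typed program is a proof of the formula read off from its type. A minimal sketch in Haskell (an illustration of the general idea; the names are mine), reading `->` as implication, pairs as conjunction, and `Either` as disjunction:

```haskell
-- A ⊃ A: the identity proof.
identity :: a -> a
identity x = x

-- (A ∧ B) ⊃ A: conjunction elimination.
weakenL :: (a, b) -> a
weakenL (x, _) = x

-- (A ⊃ B) ⊃ ((B ⊃ C) ⊃ (A ⊃ C)): composing programs chains implications.
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = g . f

-- A ⊃ (A ∨ B): disjunction introduction.
inl :: a -> Either a b
inl = Left
```

Running such a proof is just evaluation: `compose (+1) (*2)` applied to `3` yields `8`, mirroring the normalization of the corresponding derivation.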
This link between logic and computer science will then be interpreted and discussed in terms of a unificationist account of explanation. A central idea of unificationist explanation is that a good explanation is expected to unify different “phenomena”, since unifying theories play an important role in science. Our interpretation of the Curry-Howard isomorphism in terms of unificationist explanation is based on the works of Friedman, Kitcher and Salmon.
Although it has often led through harsh terrain, paradox has been a good guide in the analysis of important concepts. Just think of the role Russell’s paradox has played in the development of our concept of set, the Liar paradox in the theory of truth as well as in proposed revisions of our notion of consequence, and Zeno’s paradoxes in the theory of the continuum and thus in the metaphysics of time, space, and movement. Can attention to these and other paradoxes be similarly helpful for our understanding of explanation? That is the question I want to explore in my presentation. I will concentrate on the more specific case of metaphysical explanation, a.k.a. grounding. Thus the technical tools developed in recent work on the logic of grounding become available, allowing us to characterize its structural features in a precise way.
This amounts to developing an approach to the study of grounding that is still new. Of course, grounding already has its own puzzles, and they provide a welcome starting point. But as of now, these puzzles and other paradoxes have played only a minor role in the debate about how grounding should be analyzed. I propose to change that and put paradox center stage in the study of explanation and of grounding in particular, especially in the study of its structural features. Fortunately, to use paradox as a guide in the analysis of grounding we need not try to invent new paradoxes that concern grounding and no other notions, because it will turn out that we need only look at the grounding side of the well-known paradoxes of sets, truth, space, and so on to discover a whole range of new puzzles of grounding. These will in turn provide new arguments concerning several important properties of grounding, in particular concerning the important and controversial question of whether grounding is foundational and, if so, in precisely what sense. Adopting this novel methodology for the study of grounding will thus be rewarding. As a spin-off, there will also be ground-theoretic insights relevant to the understanding of the paradoxes.
According to the picture provided by semantic grounding, when it comes to determining the semantic values of sentences, some sentences are fundamental. More precisely, the atomic sentences of the base language (i.e. the language containing only non-semantic predicates) constitute the primitive stock of truths and falsities. Examples include truths such as «2 + 2 = 4» and falsities such as «grass is red». The semantic value of a complex sentence is then determined by the semantic values of the fundamental sentences. So, the truth of «2 + 2 = 4» and the falsity of «grass is red» determine (i.e., semantically ground) the falsity of the conjunction «2 + 2 = 4 and grass is red», which in turn semantically grounds still more complex sentences. The interpretation of the semantic vocabulary is also determined by its application to the fundamental sentences: the truth of «2 + 2 = 4» semantically grounds the truth of «‘2 + 2 = 4’ is true», and so on.
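This determination of complex sentences by the base language can be pictured as a simple recursive computation. The following is an illustrative toy of my own (the names and the tiny base valuation are hypothetical), not part of the authors' formal apparatus:

```haskell
-- Toy sentences: atoms of the base language, negation, and conjunction.
data Sent = Atom String | Not Sent | And Sent Sent deriving Show

-- The primitive stock of truths and falsities: a valuation of atomic,
-- non-semantic sentences of the base language.
baseVal :: String -> Bool
baseVal "2 + 2 = 4"    = True
baseVal "grass is red" = False
baseVal _              = False

-- The value of a complex sentence is determined by the values of its
-- parts, and ultimately by the base valuation alone.
eval :: Sent -> Bool
eval (Atom s)  = baseVal s
eval (Not p)   = not (eval p)
eval (And p q) = eval p && eval q
```

On this toy picture, the falsity of `And (Atom "2 + 2 = 4") (Atom "grass is red")` is fixed by the truth of the first conjunct together with the falsity of the second.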
Semantic grounding is intuitively connected to a less specific notion of grounding, metaphysical grounding. Metaphysical grounding has been employed to explicate various kinds of metaphysical relations, from explanatory relations to notions of fundamentality. Simplifying a little, one could say that a sentence, proposition, fact (or possibly a different kind of entity) p metaphysically grounds a sentence, proposition, fact (or possibly a different kind of entity) q if q obtains in virtue of p.
Metaphysical grounding and semantic grounding intuitively share several structural features. For instance, both are arguably strict partial orders, that is, irreflexive, asymmetric, and transitive relations. Moreover, the logics of metaphysical grounding seem to share several features with the logics governing notions of semantically grounded truth (Fine 2010). Therefore, if semantic grounding and metaphysical grounding are related notions, it seems desirable to give them a unified treatment. For one thing, this could help to determine precisely what the relations between them are, e.g. in deciding whether one notion is to be reduced to the other. For another, a unified treatment of semantic and metaphysical grounding could unify the ground-theoretic talk that is found in very different areas of philosophy, i.e. the investigation of semantic paradoxes and self-applicable semantic notions on the one hand, and the debates surrounding explanation and fundamentality on the other.
In this work, we aim to provide a first step towards unifying semantic and metaphysical grounding. We investigate a core notion of grounding, whose characteristics are arguably shared by both its semantic and its metaphysical realizations. There are at least two distinct approaches to presenting the core of semantic and metaphysical grounding. In the first place, we investigate a grounded notion of reasoning, i.e. the inference patterns that are licensed by ultimately grounded truths and falsities. We show that grounded reasoning requires a substructural, non-reflexive logic, which must replace the fully structural characterizations of inference in a grounded setting, such as the one proposed by Halbach and Horsten (2006). With such a non-reflexive logic, it is possible to fully internalize the groundedly valid inferences, expressing them as sentences of the object language. In the second place, we investigate core grounding itself: we provide both an idealized model to determine the truth-conditions of sentences of the form «‘φ’ grounds ‘ψ’» and a calculus to determine the principles that core grounding respects. By contrast, the relation of core grounding is characterized via a fully irreflexive logic.
In this talk I would like to compare two concepts which both (i) relate to epistemic markers of explanation and (ii) have found applications in computer programs and formal models of computation. The first concept is a feature desired of machine learning models (MLMs): the interpretability of the decisions such models imply. I will describe the motivation for seeking interpretability in MLMs and give examples of interpretable MLMs, such as decision trees, decision rules and linear regression (cf. Molnar, 2019). Then I will discuss known concepts of interpretability and their epistemic value in providing an explanation. This will lead to the introduction of the second concept, namely reliability as approached in formal learning theory (FLT). Instead of trying to explicate what reliability means in the context of learning new information, FLT shifts the focus towards the more attainable task of determining which precise senses of reliability can be achieved for a certain specified learning problem (Kelly, 2001). Potentially related to such tasks in FLT are model-independent (or: model-agnostic) methods for interpreting black-box models, such as, e.g., using feature importance or explaining individual predictions using solution concepts like Shapley values from cooperative game theory (cf. Gul, 1989).
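To fix ideas on that last point: the Shapley value assigns each player (or feature) its marginal contribution averaged over all coalitions. The following is a generic exact computation for small games in Haskell (my own sketch of the standard definition, not the specific methods discussed in the talk):

```haskell
import Data.List (subsequences, (\\))

factorial :: Int -> Double
factorial n = fromIntegral (product [1 .. n])

-- Exact Shapley value of player i in the game with the given player set
-- and characteristic function v. Enumerates all coalitions not containing
-- i, so this is feasible only for small player sets.
shapley :: [Int] -> ([Int] -> Double) -> Int -> Double
shapley players v i =
    sum [ weight (length s) * (v (i : s) - v s)
        | s <- subsequences (players \\ [i]) ]
  where
    n = length players
    weight k = factorial k * factorial (n - k - 1) / factorial n
```

For an additive game each player's Shapley value is just its own worth: with `v = fromIntegral . sum :: [Int] -> Double`, `shapley [1,2,3] v 2` is `2.0`. In feature attribution, `v` would instead measure a model's prediction on a feature subset.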
The theory of grounding has come to be the framework of choice for modeling metaphysical explanation and dependence. It is routine to characterize grounding by way of postulates constraining its logic. The aim of the present paper is twofold: firstly, it will be shown that a subset of those postulates is incompatible with a minimal characterization of metaphysical modality; then, I will consider a number of strategies aimed at reconciling ground and modality. The reconciliation, as it turns out, is possible, but it imposes very specific constraints on the underlying logic.