Here you can have a look at the projects and main researchers involved in the ForMat seminar!
Epistemic and semantic commitments of foundational theories
The principal objective of the project is to provide a deep conceptual and formal analysis of the notion of the commitments of a foundational theory, i.e. a theory in which a significant portion of mathematics can be developed. The notion of a commitment is essentially involved in many discussions in contemporary formal philosophy. The project focuses on the following two types of commitments.
• Epistemic commitments of a theory Th: sentences of the language of Th (possibly with the truth predicate added) that should be accepted once we accept the axioms and the inference rules of Th.
• Semantic commitments of a theory Th: restrictions on possible interpretations of Th imposed by the axioms and the deductive machinery of Th.
A typical example of an epistemic commitment of a theory Th is the consistency statement: it has been claimed that when you accept Th, you should also accept that Th is consistent, even though by Gödel's second incompleteness theorem, the consistency of Th cannot be proved in Th itself. Another example is the statement that all theorems of Th are true. Semantic commitments differ from epistemic ones in that we do not require that they be expressible in the language of Th (even enriched with the truth predicate). A description of such commitments involves explaining how our specific choice of axioms restricts the class of possible interpretations (or models) of Th. The project consists of the following three main tasks.
Task 1: Providing a general characterisation of the scope and the source of epistemic commitments.
Task 2: Analyzing the truth axioms and the epistemic commitments they generate.
Task 3: Analyzing the truth axioms and the semantic commitments they generate.
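To fix notation, the two sample epistemic commitments mentioned above can be rendered formally. Here Prov_Th is a standard provability predicate for Th, ⌜·⌝ is a Gödel numbering, and T is a truth predicate added to the language; this is one standard rendering, not the only one:

```latex
% Consistency statement: Th proves no contradiction
\mathrm{Con}(Th) \;:\equiv\; \neg\,\mathrm{Prov}_{Th}(\ulcorner 0=1 \urcorner)

% Global reflection: every theorem of Th is true
\mathrm{GR}(Th) \;:\equiv\; \forall \varphi\,
  \bigl(\mathrm{Prov}_{Th}(\ulcorner \varphi \urcorner)
        \rightarrow T(\ulcorner \varphi \urcorner)\bigr)
```

By Gödel's second incompleteness theorem, a consistent Th of sufficient strength does not prove Con(Th); the global reflection principle GR(Th) is stronger still, since together with modest truth axioms it implies Con(Th).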
In my project "Higher-order logics and absoluteness phenomena in set theory," I am investigating what absoluteness phenomena in set theory imply about the foundational status of second- and other higher-order logics. Since crucial parts of mathematics can be reduced to the validities of higher-order logic, it seems that mathematics does not require us to make any assumptions beyond those required by logic alone.
Unfortunately, it is a contentious issue whether second- and higher-order logic come with assumptions of their own. First of all, such logic mentions sets and relations, so it seems that we need to assume at least that these exist. We may argue that the existence of sets and relations does not require any ontological commitments other than the existence of the elements of these sets. The situation is more problematic, though. The validities of second-order logic depend on the underlying set-theoretic assumptions. Therefore, any reduction of mathematics to higher-order logic seems inherently circular: since the validities of the logic change as we change the background set theory, there is no point in reducing the latter to the former.
However, there is an important group of formal results which seem relevant to this philosophical discussion and which have not yet been the subject of thorough philosophical study. It turns out that under sufficiently strong set-theoretic assumptions about large cardinals, a crucial part of second-order logic, namely second-order arithmetic, becomes fixed under further modifications of the set-theoretic universe. In other words, the sensitivity of second-order arithmetic to the set-theoretic assumptions appears only when these assumptions are weak enough. In my project, I analyse the implications of these results for the philosophical status of higher-order logic, as well as their formal ramifications.
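A paradigmatic result of this kind, stated informally and under one common formulation (the precise statement requires care), is Woodin's generic absoluteness theorem: in the presence of a proper class of Woodin cardinals, the theory of second-order arithmetic cannot be changed by set forcing.

```latex
% Woodin (informally): if there is a proper class of Woodin
% cardinals, then for every set forcing \mathbb{P} and generic G,
\bigl(H(\omega_1), \in\bigr)^{V} \;\equiv\; \bigl(H(\omega_1), \in\bigr)^{V[G]}
% i.e. the structure of second-order arithmetic satisfies the same
% first-order sentences before and after forcing.
```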
Meta-theoretic and Inter-theoretic Reductions in the Foundations of Mathematics (MIRFoM)
The MIRFoM project investigates two types of reductions that are ubiquitous in the foundations of mathematics and formal philosophy. The first type involves reducing informal concepts to formal theories. Such a formalization is the first tacit move of almost any reasoning in foundational studies. However, what makes a formal theory a good formalization of an informal concept? Are there theories that precisely grasp a specific mathematical domain? The second type consists of reductions between formal theories. What philosophical corollaries can we draw from the existence of a particular reduction between theories T and S? And, in connection to the first part of our study, which inter-theoretical reductions preserve the conceptual content of a theory?
Mathematical justification and entitlement in the foundations of mathematics
The notion of proof is integral to the concept of justification in mathematics, and it lies at the core of the philosophy of mathematics and epistemology. However, the concept of proof provides only an incomplete picture of mathematical justification: epistemic practices such as mathematics have cornerstones, preconditions or presuppositions, which are essential for the practice's epistemic integrity. That proofs in our best theories can only provide a partial epistemology is not only a philosophical but also a mathematical fact: since Kurt Gödel's discovery of the incompleteness theorems, philosophers, logicians and mathematicians must face not only a logical and mathematical, but also an epistemic incompletability of mathematics: no consistent mathematical theory can justify all of mathematics. This incompletability poses the issue of justifying mathematical cornerstones. Examples of mathematical cornerstones are principles expressing fundamental properties of the most basic mathematical concepts, or principles expressing the reliability of our best mathematical theories.
There has been a shift in epistemology towards the idea that mathematical cornerstones, i.e. basic mathematical axioms and concepts, can only be entitled. Crucially, entitlement behaves quite differently from more traditional mathematical justifications: entitlement is a default type of justification, which is never the result of cognitive, evidential work. Importantly, entitlements can be defeated or undercut. Entitlement drastically changes the traditional epistemological landscape and, if successfully integrated into our epistemology of mathematics, it has the potential to provide a more complete picture of mathematical practices and a more fine-grained conceptual framework for the epistemology of mathematics. Yet the notion of mathematical entitlement has yet to be fully developed. Moreover, no attempt has been made towards a systematic integration of entitlement into the standard proof-based epistemology. Little is known about the normative force and epistemic role of entitlement, or about its impact on the question of the objectivity of mathematics.
The project's main goal is to make substantial progress towards a better understanding of entitlement and its integration into our proof-based epistemology. The project strives towards this aim by pursuing three objectives:
(1) Determine the normative force of entitlement as justification
(2) Evaluate the epistemic role of entitlement for mathematical justification
(3) Analyse the impact of entitlement on the objectivity of mathematics
Computable structure theory and the philosophy of mathematical structuralism
Dariusz Kalociński is the PI of the project "Computable structure theory and the philosophy of mathematical structuralism", funded by the grant agreement no. 2023/49/B/HS1/03930, National Science Centre Poland. The project explores theoretical problems in computable structure theory and their philosophical underpinnings inspired by mathematical structuralism.
First project:
The main objective of our project is to provide a better understanding of the asymmetry between salient (or natural) and artificial (or ad hoc) theories in the foundations of mathematics. The objects under our scrutiny, i.e. formal systems, are well-defined mathematical entities, but the above-mentioned salience criterion is pre-theoretical.
While from a formal perspective any set of sentences can be called a theory, intuitively each theory should be a theory of something: it should describe a real world phenomenon, capture the behaviour of a certain concept or at least fulfill one of these tasks to a certain reasonably high degree. Many theories studied in the foundations of mathematics have this property: they derive from our analysis of the structure of natural numbers (such as the canonical theory of natural numbers: Peano Arithmetic, PA), results in analysis (subsystems of Second Order Arithmetic), investigations into the concept of truth (Axiomatic Theories of Truth) or the hierarchy of sets (such as Zermelo-Fraenkel set theory, ZF).
We can ask whether there is any deeper, conceptual difference between natural and artificial theories. Is it just an accidental phenomenon that some formal systems attracted theorists' attention, or do they owe their salience to some deeper intrinsic feature? Are salient theories separable from the artificial ones by a formal criterion?
In the project, the distinction between salient and artificial theories is approached from three different angles. Firstly, we scrutinize the intuitive view that natural theories arise from comprehensive descriptions of salient mathematical structures, such as the natural numbers or fragments of the hierarchy of sets. Secondly, we analyze a philosophical property of formal systems known as epistemic stability. Roughly speaking, an epistemically stable theory captures a standpoint in the philosophy of mathematics: the theory is sound and complete with respect to the methods of developing mathematics according to that standpoint. Finally, the third dimension of our project consists in seeking coordinate-free characterizations of formal systems, that is, complete descriptions of a given system that mention neither the language in which the system is formulated nor its specific axiomatization.
Second project:
Formalizing observations and behavior is an integral part of science that allows us to use scientific methods to study natural phenomena. In mathematics, this is done by formulating axioms in suitable logics, such as first-order logic, which should capture the behavior of the structure or operation one wants to study. However, early results in mathematical logic tell us that this endeavor is bound to fail. It follows from Gödel's completeness theorem (via the compactness theorem) that all but the most trivial axiomatizations, or theories, will have models besides the intended model whose behavior we set out to capture. Consider, for example, Peano arithmetic, the theory supposed to capture mathematics on the natural numbers: it has models with infinite numbers.
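The existence of such models can be seen through the standard compactness argument, sketched here with c a fresh constant symbol and \underline{n} the numeral for n:

```latex
% Extend the language of PA with a new constant c and let
T \;=\; \mathrm{PA} \,\cup\, \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\}
% Every finite subset of T holds in the standard model \mathbb{N}
% (interpret c as a sufficiently large number), so by compactness
% T has a model M. In M, the interpretation of c exceeds every
% numeral, i.e. M contains an "infinite" number.
```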
How complicated are these models? Can we easily distinguish them from our intended model? Moreover, how complicated is it to describe the elements of a model which exhibit non-intended properties? The novel idea behind our project is to use Scott analysis, which provides a robust measure of structural complexity and is an established tool in descriptive set theory and computable structure theory, to answer such questions formally. The idea behind Scott analysis is to look at formulae in infinitary logic which uniquely describe the given countable model (up to isomorphism). The complexity of the model is the least rank of a formula which is such an unambiguous characterization. While Scott analysis has seen much use in the past, it has barely been used in the context of "foundational theories", theories that formalize large parts of everyday mathematics, such as Peano arithmetic, second-order arithmetic, or set theories such as Kripke-Platek or Zermelo-Fraenkel set theory.
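To make the "unambiguous characterization" precise (conventions for Scott rank vary slightly across the literature), one works with Scott sentences in the infinitary logic L_{ω₁ω}:

```latex
% \sigma_A is a Scott sentence of a countable structure A if,
% for every countable structure B in the same signature,
B \models \sigma_A \quad\Longleftrightarrow\quad B \cong A
% The Scott rank of A is then, roughly, the least ordinal \alpha
% such that A has a Scott sentence of complexity \alpha
% (e.g. a \Pi_{\alpha+1} sentence, in one common convention).
```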
This project aims to obtain a thorough analysis of the structural complexity of foundational theories. Special focus will be put on whether foundational theories have "intended" models and how this intendedness can appear. We want to carry out the Scott analysis of theories with foundational character and investigate whether Scott analysis and the use of infinitary logic are suitable to detect the intendedness of models or other interesting properties of foundational theories. In order to approach this phenomenon at the right level of generality, we introduce the notion of a Scott function, which measures how many models of a given complexity the given theory has. Determining the Scott functions of foundational theories is one of the main scientific goals of the project.
Investigations of the questions outlined here will enable us to isolate new properties of first-order theories, such as the simplest model property, which will provide new, meaningful means for classifying foundational and other theories.