Papers & Talks
The links on this page open each item for viewing; a link at the bottom of the page leads to downloadable versions.
"Unpicking Priest's Bootstraps", Thought (forthcoming).
Abstract: Graham Priest has argued that the fruits of classical set theory can be obtained by naive means through a puzzling piece of reasoning often known as the bootstrapping argument [Priest, 2006]. I will demonstrate that the bootstrapping involved is best understood as viciously circular and thus that these fruits remain forbidden. The argument has had only one rehearsal in print and it is quite subtle. This paper provides a reconstruction of the argument based on [Priest, 2006] and attempts some fixes and alternative construals to get around some elementary problems. Despite these efforts, the argument remains unconvincing.
Abstract: It is a commonplace of set theory to say that there is no set of all well-orderings nor a set of all sets. We are implored to accept this due to the threat of paradox and the ensuing descent into unintelligibility. In the absence of promising alternatives, we tend to take up a conservative stance and toe the line: there is no universe [Halmos, in: Naive set theory, 1960]. In this paper, I am going to challenge this claim by taking seriously the idea that we can talk about the collection of all the sets and many more collections beyond that. A method of articulating this idea is offered through an indefinitely extending hierarchy of set theories. It is argued that this approach provides a natural extension to ordinary set theory and leaves ordinary mathematical practice untouched.
"Infinitary Tableau for Semantic Truth", Review of Symbolic Logic. 8, 2, pp2017-235.
Abstract: We provide infinitary proof theories for three common semantic theories of truth: strong Kleene, van Fraassen supervaluation and Cantini supervaluation. The value of these systems is that they provide an easy method of proving simple facts about semantic theories. Moreover, we show that they also give us a simpler understanding of the computational complexity of these definitions and provide a direct proof that the closure ordinal for Kripke's definition is the supremum of the recursive ordinals, ω₁^CK. This work can be understood as an effort to provide a proof-theoretic counterpart to Welch's game-theoretic approach.
"Naive Infinitism", Notre Dame Journal of Formal Logic. 56, 1, p191-212.
Abstract: This paper expands upon a way in which we might rationally doubt that there are multiple sizes of infinity. The argument draws its inspiration from recent work in the philosophy of truth and philosophy of set theory. More specifically, elements of contextualist theories of truth and multiverse accounts of set theory are brought together in an effort to make sense of Cantor's troubling theorem. The resultant theory provides an alternative philosophical perspective on the transfinite, but has limited impact on everyday mathematical practice.
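The troubling theorem in question is Cantor's: no assignment of subsets to elements exhausts the power set. For readers unfamiliar with it, the diagonal construction at its heart can be sketched in a few lines (the function and variable names below are illustrative, not drawn from the paper):

```python
def diagonal(domain, enumerate_subset):
    """Given a purported assignment of a subset of `domain` to each
    element of `domain`, return a subset the assignment misses: it
    disagrees with enumerate_subset(x) precisely on x itself."""
    return {x for x in domain if x not in enumerate_subset(x)}

# A toy attempt to enumerate the subsets of {0, 1, 2}:
assignment = {0: {0}, 1: set(), 2: {0, 1, 2}}
d = diagonal({0, 1, 2}, assignment.get)
# d == {1}: it differs from assignment[x] at x for every x,
# so no assignment of this shape can be onto the power set.
```

Since the diagonal set disagrees with every assigned set at the relevant element, the assignment cannot be surjective, whatever the domain.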
"Fixed Points for Consequence Relations", Logique et Analyse. 57, 227, p333-357.
Abstract: This paper provides a way of dealing with paradoxes associated with consequence relations via a generalisation of Kripke's fixed point technique. In particular, we focus on Beall and Murzi's paradox, although the framework outlined should have more general application. We first attempt to locate this problem among the existing corpus of semantic paradoxes. We then examine two canonical approaches to the issue and conclude with an inductive construction which, in some fashion, goes beyond those approaches.
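Kripke's technique, which the paper generalises, rests on iterating a monotone operator until it stabilises at a least fixed point. In the finite case the idea can be sketched as follows (a minimal illustration with invented names; the toy "jump" operator stands in for the far richer operators used in theories of truth):

```python
def least_fixed_point(operator, bottom=frozenset()):
    """Iterate a monotone operator from the empty interpretation
    until nothing new is added (terminates in the finite case)."""
    current = bottom
    while (nxt := operator(current)) != current:
        current = nxt
    return current

# Toy example: which sentences are 'grounded', given dependencies.
# s1 depends on nothing; s2 depends on s1; s3 depends on itself.
deps = {"s1": set(), "s2": {"s1"}, "s3": {"s3"}}

def jump(interpreted):
    # Count a sentence as grounded once everything it depends on is.
    return frozenset(s for s, d in deps.items() if d <= interpreted)

# least_fixed_point(jump) yields {"s1", "s2"}: the ungrounded,
# self-dependent s3 never enters the fixed point.
```

Monotonicity of the operator is what guarantees that the iteration climbs to a fixed point; the paradoxical sentences are those left outside it.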
second order logic and the philosophical conclusions that can be drawn from them. We provide a way of seeing this result, so to speak, through a first order lens divested of its second order garb. Our purpose is to draw into sharper relief exactly what is involved in this kind of categoricity proof and to highlight the fact that we should exercise caution before drawing powerful philosophical conclusions from it.
Abstract: Leitgeb provides a theory of truth which is based on a theory of semantic dependence. We argue here that the conceptual thrust of this approach provides us with the best way of dealing with semantic paradoxes in a manner that is acceptable to a classical logician. However, in investigating a problem that was raised by Leitgeb, we discover that something is missing from Leitgeb's original definition. Moreover, we show that once the appropriate repairs have been made, the resultant definition is equivalent to a version of the supervaluation definition suggested by Kripke. The upshot of this is a philosophical justification for the simple supervaluation approach.
Abstract: I provide a tableau system and completeness proof for a revised version of Carnap's semantics for quantified modal logic. For Carnap, a sentence is possible if it is true in some first order model. However, in a similar fashion to second order logic, no sound and complete proof theory can be provided for this semantics. This factor contributed to the ultimate disappearance of Carnapian modal logic from contemporary philosophical discussion. The proof theory I discuss comes close to Carnap's semantic vision and provides an interesting counterpoint to mainstream approaches to modal logic. Despite its historical origins, my intention is to demonstrate that this approach to modal logic is worthy of contemporary attention and that current debate is the poorer for its absence.
"The Generic Multiverse Debate", (in preparation).
"Non-classical Computability Theory", (in preparation with Zach Weber).
"Diagonal Theory: toward a classification of diagonal arguments", (in preparation).
"Modality without Metaphysics", PhD thesis.
Recent & Upcoming Talks
"What's so Natural about Forcing?" Philosophy of Mathematics and Logic Conference, Oxford University, June 2015.
"Diagonal Descartes & the Architecture of Doubt", Philosophy of Mathematics and Logic Workshop, University of St Andrews, June 2015.
"What's so Natural about Generic Extensions?" Discourse and Philosophy Colloquium, Institute for Logic, Language and Computation (ILLC), Amsterdam, May 2015.
"What are foundations and what are they good for?" 2nd Symposium on the Foundations of Mathematics, Birkbeck University, January 2015 (keynote speech).
"Sifting through the wreckage: some thoughts on the foundations of mathematics", University of St Andrews and University of Glasgow, October and November 2015.
"The Generic Multiverse Debate," 1st Symposium on the Foundations of Mathematics, Kurt Gödel Research Center, Vienna, July 2014.
"Inclosure Inverted: a classical perspective," University of St Andrews, June 2013.
"Infinitary Logic and Truth," Midlands Logic Seminar, University of Birmingham, March 2013.
"Semantic Theories of Truth", University of Bristol, November 2012.
"Sets, Supersets and Closure", Oxford, November 2012.
Abstract: In the philosophy of set theory, there is often a reticence to take large collections like the set of all sets seriously as collections over which we may quantify. They are relegated to the proper-class or plurality corner as either metalinguistic gloss or ontological oddity. In this paper, I take large collections seriously as set-like objects over which we may quantify and which satisfy axioms very like the ones we expect of ordinary sets. I sketch a way in which we might formalise this intuition and I call theories that do this, superset theories. I then argue there are good reasons why mathematicians should not be interested in such theories, but philosophers, on the other hand, should be.
Abstract: The purpose of this paper is to lay out some groundwork for a general understanding of the relationships between notions of dependence and groundedness, on the one hand, and notions of complexity and cardinality, on the other. We commence by sketching a pleasing framework linking dependence with the theory of inductive definitions. We pose a kind of problem for this framework and attempt to explain why this is interesting using a case study from the semantic theory of truth literature. We then attempt to formulate a definition of local dependence which addresses this issue. Finally, we consider limitations of this definition and possibilities for its further generalisation.
Abstract: The goal of this paper is three-fold. Its message is largely conciliatory. First, I would like to argue that a third approach to truth has been omitted: logics of truth. Second, I shall argue that while there is a narrow sense in which these approaches exclude each other, we would be better off understanding their distinction in terms of different research programmes with very different motivations. As such, the question of any preference between them is best understood as a philosophical question about what we want to do with a truth predicate. I shall argue that Tarski's undefinability theorem is the cause of this fracture and that rather than attempting to demonstrate the weakness of alternative programmes, we should let each programme flourish in its own way. Indeed, we should be willing to use the results of one programme to further that of another. Third, I shall illustrate an example of this kind of cross-fertilisation by attempting to show how semantic theories of truth may be used to provide a uniform guide to the construction of axiomatic theories.
"A Plea for Complexity Considerations", University of St Andrews, December 2011.
"Naive Infinitism", University of St Andrews, December 2011.
"Infinitary Proofs for Semantic Truth", University of Bristol, November 2011.
"Forcing for Philosophical Logicians", University of Bristol, November 2011.
Abstract: Since the Cantorian origins of set theory, the continuum hypothesis (CH) has posed a fundamental stumbling block for our attempts to grapple with the infinite. Perhaps the greatest step forward with respect to this question came in 1963 when Paul Cohen demonstrated the independence of CH from ZFC by providing a model in which it is false. The construction of the model involves the adjunction of a new, generic object via the technique of forcing. Since then, forcing has revolutionised research in set theory and has become a standard tool in the set theorist's kit. Despite this, there remains a certain enigma to the technique: it is difficult to explain why it works. In this respect, it contrasts with other breakthroughs in mathematical logic from the twentieth century. For example, while Gödel's original completeness proof for first order logic is somewhat opaque, Henkin's proof is a paragon of transparency: it is easy to see how the strategy lines up with our initial goals.
The goal of this paper is to provide a philosophically satisfying explanation of why forcing works and, in so doing, an analysis of what a generic set is. We shall do this by providing a high level analysis of the goals of some famous theorems which employ the forcing technique. From here, we shall distil out the core goals of the proof. We shall then provide a kind of narrative which will commence with some abortive solutions and then use the lessons learned from them to zero in on an approach that does work: forcing. With this in place, we shall then look at some things which a generic set is not. Finally, we shall sum up with a general account of forcing and generic objects.
"Truth, Dependence and Supervaluation", Logic, Methodology and Philosophy of Science, July 2011.
"Proofs for Truth", Munich Centre for Mathematical Philosophy, Ludwig Maximilian University of Munich, July 2011.
"The very idea of a Foundation: sometimes it's good to put a round peg in a square hole", St Andrews Philosophical Society, University of St Andrews, April 2011.
"Supervaluation and Strong Kleene: some grounded remarks", Paradox and Logical Revision workshop, University of St Andrews, April 2011.
"Categoricity ... not so categorical", Plural, Predicates and Paradox project, Birkbeck College, London, February 2011.