The conference has been postponed to 2022
Conventionalist routes to logical pluralism
Erik Stei
Conventionalism about logic can be characterized as the claim that the meaning of logical expressions is determined by the conventions or rules of the language in which they occur. On this view, logical definitions are not theories about ‘the world’—or about anything external for that matter—but conventions of language (cf. Quine 1966). The claim was, of course, a substantial aspect of logical positivism. Absent any empirical content, logical and mathematical truths could still be said to be necessary and a priori (see e.g. Ayer 1962). More recently, however, conventionalist ideas, taken mostly from Carnap (1937), have been used to motivate logical pluralism—the view that there is more than one correct logic. In the talk, I discuss conventionalist routes to logical pluralism (as discussed in Shapiro 2014 and defended in Kouri 2016, Kouri Kissel 2018). The hypotheses I defend are (i) that standard conventionalism only allows for uncontroversial versions of logical pluralism that even logical monists can happily accept; and (ii) that the pluralist will have to commit to substantial external claims that are alien to the conventionalist mindset in order to rule out monist interpretations of her view.
References
Ayer, A. J., 1962. Language, Truth and Logic. Victor Gollancz, London.
Carnap, R., 1937. Logical Syntax of Language. Reprint (2001). Routledge, London.
Kouri, T., 2016. A new interpretation of Carnap's logical pluralism. Topoi.
Kouri Kissel, T., 2018. Logical pluralism from a pragmatic perspective. Australasian Journal of Philosophy 96, 578–91.
Quine, W. V., 1966. Truth by convention. In: The Ways of Paradox and Other Essays. Harvard University Press, Cambridge, pp. 77–99.
Shapiro, S., 2014. Varieties of Logic. Oxford University Press, Oxford.
Cooperation and Determining when Merely Verbal Disputes are Worthwhile
Teresa Kouri Kissel
Merely verbal disputes are often thought to be disputes about language that are not worthwhile. Recent work suggests that some merely verbal disputes may not be problematic in this way (see, for example, Balcerak Jackson (2014), Belleri (2020) and Mankowitz (2021)). In this paper, I propose that this recent work misses one crucial point: interlocutors have to cooperate in the right kinds of ways in order for a dispute to be worthwhile. Using the question-under-discussion framework developed in Roberts (2012), I identify a form of cooperation that, I show, distinguishes between merely verbal disputes that are worthwhile and those that are not.
If this paper is correct that what sometimes makes disputes about language worthwhile is that the interlocutors are willing to cooperate in the right kinds of ways, then there is a critical upshot: interlocutors can control whether their dispute is worth their time. That is, if interlocutors decide to treat what the other is saying as true for the purposes of the conversation, or if they manage to reach some compromise about things they are both willing to accept as true, then they can go from having a worthless dispute to having a worthwhile one.
The Omega Rule and the Categoricity Problem
Julien Murzi and Brett Topey
The omega rule is familiar enough: it allows one to move from the premises
φ(a), φ(b), φ(c), φ(d), …
to the conclusion
∀xφ(x)
As Tarski pointed out long ago, the rule is intuitively valid: if each object in an infinite domain is φ, then every object in that domain is φ. However, the rule is not recursive: finite beings cannot determine, even in principle, whether a putative instance of the rule is genuinely an instance. It is questionable, therefore, whether finite beings can follow the omega rule.
Jared Warren, though, has argued that ordinary speakers can and do follow that rule (2020, 2021). This fact, he claims, has key metasemantic consequences: it allows one to provide an inferentialist-friendly account of the determinacy of our mathematical theories. Indeed, James Garson (2013) and Constantin Brinçus (2022) have recently made a further metasemantic claim: that our (supposed) ability to follow the omega rule plays an essential role in explaining the categoricity of the quantifiers. As Garson and Brinçus point out, Carnap (1943) not only exhibited deviant interpretations of the propositional connectives – this is Carnap's Categoricity Problem, to which a number of solutions have been proposed (Carnap 1943, Smiley 1996, Garson 2013, Bonnay & Westerståhl 2016, Murzi & Topey 2021) – he also exhibited deviant interpretations of the quantifiers, which have been virtually ignored by philosophers and logicians alike. Crucially, Garson and Brinçus claim that Carnap's deviant interpretations of the quantifiers, as well as other deviant interpretations identified by Garson, can only be ruled out on the assumption that we can follow the omega rule. As a result, Brinçus maintains, the account of the categoricity of the quantifiers proposed by Murzi & Topey (2021), according to which the rules for the quantifiers are entirely standard, is hopeless.
Here we argue, first, that there's no reason to think we can follow the omega rule, and second, that this isn't a significant problem: pace Garson, Brinçus, and Warren, the followability of that rule has no essential role to play in explaining either the categoricity of the quantifiers or the determinacy of arithmetic. We first show that the most promising argument in the literature for the omega rule's followability – Warren's argument, which purports to exhibit a case in which we actually use the rule to infer a generalization from infinitely many instances – is unconvincing: there's conclusive reason to think that, in the case in question, we're in fact reasoning from the generalization to the instances rather than the other way around. We then show that the inferentialist-friendly account of the categoricity of the quantifiers (of finite order) provided in Murzi & Topey 2021 is robust enough to rule out both Carnap's and Garson's deviant interpretations, despite Brinçus's argument to the contrary. Finally, we explain that this result allows for an inferentialist-friendly account of the determinacy of our mathematical theories: the categoricity of second-order arithmetic is an immediate consequence of the categoricity of the second-order quantifiers, and determinacy follows from categoricity, despite Warren's mistrust of model-theoretic results.
An Argument from Proof Theory Against Implicit Conventionalism
Rea Golan
Conventionalism about logic is the view that logical principles hold in virtue of some linguistic conventions. According to explicit conventionalism, these conventions have to be stipulated explicitly. Explicit conventionalism is subject to a famous criticism by Quine, who accused it of leading to an infinite regress. In response to the criticism, several authors have suggested reconstructing conventionalism as implicit in our linguistic behavior. In this paper, drawing on a distinction from proof theory between derivable and admissible rules, I argue that implicit conventionalism faces the same criticism as explicit conventionalism. In particular, it fails to account for certain admissible yet underivable rules as valid by convention.
Inferentialism and the Admissibility of Rules
Brett Topey
For various metaphysical and epistemological reasons, a certain kind of use-theoretic view – a strongly inferentialist view, one on which the semantic (i.e., reference- and truth-theoretic) features of our expressions are fully determined by our dispositions to follow conventional rules of inference – is attractive. But it's long been known that certain sets of rules – e.g., those governing Prior's invented connective ‘tonk’ – are such that including them in our language would lead to intuitively absurd results. Inferentialists' usual response to this problem is to insist that they can avoid these results: there are constraints on what sets of I- and E-rules can legitimately be admitted into our language, the story goes, and the rules governing disruptive expressions like ‘tonk’ are defective and so inadmissible. I argue, though, that from an inferentialist perspective, there's no genuine sense in which rules like the ‘tonk’ rules are defective, in which case there's simply no inferentialist-friendly way to motivate the claim that such rules are inadmissible: anyone who endorses the sort of inferentialism under discussion here is thereby committed to a version of Carnap's principle of tolerance. I then explain that, despite appearances, this doesn't mean that inferentialism leads us into chaos, for the simple reason that actually adding disruptive expressions like ‘tonk’ to our language isn't nearly as easy as it may seem. In fact, given the place that rules governing what Sellars calls language departure transitions must occupy in any language that includes anything like observation sentences, it turns out that, as long as we're disposed to use our language to make empirical predictions, it's guaranteed, without our having to take any particular precautions, that our language won't contain ‘tonk’ or any other disruptive expressions.
The Metalinguistic Construal of Mathematical Propositions
Zeynep Soysal
Robert Stalnaker construes mathematical propositions as possible-worlds propositions about metalinguistic facts. In this talk, I aim to defend this metalinguistic construal of mathematical propositions against the commonly raised charge that it is counterintuitive. My defense will build on a conventionalist metasemantics for mathematics, as well as on some recent work I did to defend the metalinguistic construal against a different kind of internal problem that it faces.