Research
(with Richard Galvin). (forthcoming). Where do philosophers appeal to intuitions (if they do)? Metaphilosophy.
PDF: here.
ABSTRACT: It might be that intuitions are central to philosophy, and it might be that this is true because when philosophers give case-based arguments for philosophical claims (in published philosophy), the case verdict is typically (a) an intuited proposition and (b) either left undefended or defended on the grounds that it is an intuited proposition. This paper remains neutral on these global issues, however, and instead focuses on whether there is a nontrivial (or many-membered) class of case-based arguments in philosophy in which the case verdict is defended by appeal to background beliefs and not on the grounds that it is an intuited proposition. The paper argues that the answer is affirmative by examining seven such arguments that are referred to as “paradigm cases” of case-based arguments in which the verdict is justified via an appeal to intuition.
(with Elliott Sober). (forthcoming). Purely probabilistic measures of explanatory power--a critique. Philosophy of Science.
PDF: here.
ABSTRACT: All extant purely probabilistic measures of explanatory power satisfy the following technical condition: if Pr(E | H1) > Pr(E | H2) and Pr(E | ~H1) < Pr(E | ~H2), then H1’s explanatory power with respect to E is greater than H2’s explanatory power with respect to E. We argue that any measure satisfying this condition faces three serious problems – the Problem of Temporal Shallowness, the Problem of Negative Causal Interactions, and the Problem of Non-Explanations. We further argue that many such measures face a fourth problem – the Problem of Explanatory Irrelevance.
(with Elliott Sober). (2021). Hypotheses that attribute false beliefs – a two-part epistemology (Darwin + Akaike). Mind & Language, 36, 664-682.
PDF: here.
ABSTRACT: Why expect organisms that have beliefs to have false beliefs? And if an organism occasionally occupies a neural state that encodes a perceptual belief, how do you evaluate hypotheses about the state’s semantic content, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection. To address the second, we discuss a problem that is widely recognized in statistics, the problem of over-fitting, and use the Akaike Information Criterion to solve epistemological versions of the disjunction and distality problems.
(with Elliott Sober). (2021). Disjunction and distality: The hard problem for purely probabilistic causal theories of mental content. Synthese, 198, 7197-7230.
PDF: here.
ABSTRACT: The disjunction problem and the distality problem each presents a challenge that any theory of mental content must address. Here we consider their bearing on purely probabilistic causal (ppc) theories. In addition to considering these problems separately, we consider a third challenge – that a theory must solve both. We call this “the hard problem.” We consider 8 basic ppc theories along with 240 hybrids of them, and show that some can handle the disjunction problem and some can handle the distality problem, but none can handle the hard problem. This is our main result. We then discuss three possible responses to that result, and argue that though the first two fail, the third has some promise.
(with Michael Roche). (2021). Authority without privilege: How to be a Dretskean conciliatory skeptic on self-knowledge. Synthese, 198, 1071-1087.
PDF: here.
ABSTRACT: Dretske is a “conciliatory skeptic” on self-knowledge. Take some subject S such that (i) S thinks that P and (ii) S knows that she has thoughts. Dretske’s theory can be put as follows: S has a privileged way of knowing what she thinks, but she has no privileged way of knowing that she thinks it. There is much to be said on behalf of conciliatory skepticism (“CS” for short) and Dretske’s defense of it. We aim to show, however, that Dretske’s defense fails, in that (in part) if his defense of CS’s skeptical half succeeds, then his defense of CS’s conciliatory half fails. We then suggest a potential way forward. We suggest in particular that the correct way of being a Dretskean conciliatory skeptic is to deny that S has a privileged way of knowing about her thoughts, but to grant that she is nonetheless an authority on her thoughts.
(with Elliott Sober). (2019). Inference to the Best Explanation and the Screening-Off Challenge. Teorema, 38, 121-142.
PDF: here.
ABSTRACT: We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O & EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
(with Elliott Sober). (2019). Discrimination-conduciveness and observation selection effects. Philosophers' Imprint, 19, No. 40, 1-26.
PDF: here.
ABSTRACT: We conceptualize observation selection effects (OSEs) by considering how a shift from one process of observation to another affects discrimination-conduciveness, by which we mean the degree to which possible observations discriminate between hypotheses, given the observation process at work. OSEs in this sense come in degrees and are causal, where the cause is the shift in process, and the effect is a change in degree of discrimination-conduciveness. We contrast our understanding of OSEs with others that have appeared in the literature. After describing conditions of adequacy that an acceptable measure of degree of discrimination-conduciveness must satisfy, we use those conditions of adequacy to evaluate several possible measures. We also discuss how the effect of shifting from one observation process to another might be measured. We apply our framework to several examples, including the ravens paradox and the phenomenon of publication bias.
(2019). Review of David Atkinson and Jeanne Peijnenburg’s Fading foundations: Probability and the regress problem (2017, Springer). Philosophical Quarterly, 69, 212-215.
PDF: here.
(2018). The perils of parsimony. Journal of Philosophy, 115, 485-505.
PDF: here.
ABSTRACT: It is widely thought in philosophy and elsewhere that parsimony is a theoretical virtue in that if T1 is more parsimonious than T2, then T1 is preferable to T2, other things being equal. This thesis admits of many distinct precisifications. I focus on a relatively weak precisification on which preferability is a matter of probability, and argue that it is false. This is problematic for various alternative precisifications, and even for Inference to the Best Explanation as standardly understood.
(2018). Is evidence of evidence evidence? Screening-off vs. no-defeaters. Episteme, 15, 451-462.
PDF: here.
ABSTRACT: I argue elsewhere (Roche 2014) that evidence of evidence is evidence under screening-off. Tal and Comesaña (2017) argue that my appeal to screening-off is subject to two objections. They then propose an evidence of evidence thesis involving the notion of a defeater. There is much to learn from their very careful discussion. I argue, though, that their objections fail and that their evidence of evidence thesis is open to counterexample.
(2018). Foundationalism with infinite regresses of probabilistic support. Synthese, 195, 3899-3917.
PDF: here.
ABSTRACT: There is a long-standing debate in epistemology on the structure of justification. Some recent work in formal epistemology promises to shed some new light on that debate. I have in mind here some recent work by David Atkinson and Jeanne Peijnenburg, hereafter “A&P”, on infinite regresses of probabilistic support. A&P show that there are probability distributions defined over an infinite set of propositions {p1, p2, p3, …, pn, …} such that (i) pi is probabilistically supported by pi+1 for all i and (ii) p1 has a high probability. Let this result be “APR” (short for “A&P’s Result”). A&P oftentimes write as though they believe that APR runs counter to foundationalism. This makes sense, since there is some prima facie plausibility in the idea that APR runs counter to foundationalism, and since some prominent foundationalists argue for theses inconsistent with APR. I argue, though, that in fact APR does not run counter to foundationalism. I further argue that there is a place in foundationalism for infinite regresses of probabilistic support.
(with Tomoji Shogenji). (2018). Information and inaccuracy. British Journal for the Philosophy of Science, 69, 577-604.
PDF: here.
ABSTRACT: This paper proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater is the amount of information we acquire, the better is our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of measure sensitivity and fails to justify the use of MI in giving definitive answers to questions of information. We propose a fourth interpretation of MI by reduction in expected inaccuracy, where inaccuracy is measured by a strictly proper monotonic scoring rule. It is shown that the answers to questions of information given by MI are definitive whenever this interpretation is appropriate, and that it is appropriate in a wide range of applications with epistemic implications.
(2018). Is there a place in Bayesian confirmation theory for the Reverse Matthew Effect? Synthese, 195, 1631-1648.
PDF: here.
ABSTRACT: Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa’s argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa’s argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call “Increase in Probability”, “Partial Dependence”, “Partial Entailment”, “Partial Discrimination”, and “Popper Corroboration”, and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
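The contrast the abstract describes is easy to check numerically. Below is a minimal sketch with hypothetical numbers (chosen only so that H1 and H2 predict E equally well while H1 has the lower prior); confirmation is computed on two standard measures, probability-difference and probability-ratio:

```python
from fractions import Fraction as F

# Hypothetical numbers: H1 and H2 predict E equally well,
# p(E | Hi)/p(E) = 2 for both, but H1 has the lower prior.
p_E = F(1, 5)                    # p(E)  = 0.2
p_H1, p_H2 = F(1, 10), F(1, 5)   # p(H1) = 0.1 < p(H2) = 0.2
p_E_given_H = F(2, 5)            # p(E | H1) = p(E | H2) = 0.4

# Posteriors via Bayes' theorem.
p_H1_given_E = p_E_given_H * p_H1 / p_E
p_H2_given_E = p_E_given_H * p_H2 / p_E

# Probability-difference measure: d(H, E) = p(H | E) - p(H).
d1 = p_H1_given_E - p_H1
d2 = p_H2_given_E - p_H2

# Probability-ratio measure: r(H, E) = p(H | E) / p(H).
r1 = p_H1_given_E / p_H1
r2 = p_H2_given_E / p_H2

# Contrary to RME, E confirms H1 no more than H2 on either measure.
print(d1 < d2)        # True  (1/10 < 1/5)
print(r1 == r2 == 2)  # True  (both ratios equal 2)
```

On these numbers the difference measure ranks H1 strictly below H2 and the ratio measure ranks them equally, illustrating the "less than or equal to" verdict that the standard measures deliver against RME.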
(2017). Explanation, confirmation, and Hempel's paradox. In K. McCain and T. Poston (Eds.), Best explanations: New essays on inference to the best explanation (pp. 219-241). Oxford: Oxford University Press.
PDF: here.
ABSTRACT: Hempel’s Converse Consequence Condition (CCC), Entailment Condition (EC), and Special Consequence Condition (SCC) have some prima facie plausibility when taken individually. Hempel, though, shows that they have no plausibility when taken together, for together they entail that E confirms H for any propositions E and H. This is “Hempel’s paradox”. It turns out that Hempel’s argument would fail if one or more of CCC, EC, and SCC were modified in terms of explanation. This opens up the possibility that Hempel’s paradox can be solved by modifying one or more of CCC, EC, and SCC in terms of explanation. I explore this possibility by modifying CCC and SCC in terms of explanation and considering whether CCC and SCC so modified are correct. I also relate that possibility to Inference to the Best Explanation.
(2017). Confirmation, increase in probability, and the likelihood ratio measure: A reply to Glass and McCartney. Acta Analytica, 32, 491-513.
PDF: here.
ABSTRACT: Bayesian confirmation theory is rife with confirmation measures. Zalabardo (2009) focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney (2015), hereafter “G&M”, accept the conclusion of Zalabardo’s argument along with each of the premises in it. They nonetheless try to improve on Zalabardo’s argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo’s third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H’s probability. I call this sense of confirmation “IP”. I set out four ways of precisifying IP. I call them “IP1”, “IP2”, “IP3”, and “IP4”. Each of them is based on the assumption that the degree to which E increases H’s probability is a matter of the distance between p(H | E) and a certain other probability involving H. I then evaluate G&M’s argument (with a minor fix) in light of them.
(with Elliott Sober). (2017). Is explanatoriness a guide to confirmation? A reply to Climenhaga. Journal for General Philosophy of Science, 48, 581-590.
PDF: here.
ABSTRACT: We (2013, 2014) argued that explanatoriness is evidentially irrelevant in the following sense. Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (SOT) by discussing an example concerning smoking and cancer. Climenhaga (Philos Sci, forthcoming) argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
(with Michael Roche). (2017). Dretske on self-knowledge and contrastive focus: How to understand Dretske's theory, and why it matters. Erkenntnis, 82, 975-992.
PDF: here.
ABSTRACT: Dretske's theory of self-knowledge is interesting but peculiar and can seem implausible. He denies that we can know by introspection that we have thoughts, feelings, and experiences. But he allows that we can know by introspection what we think, feel, and experience. We consider two puzzles. The first puzzle, PUZZLE 1, is interpretive. Is there a way of understanding Dretske's theory on which the (potential) knowledge affirmed by its positive side is different than the (potential) knowledge denied by its negative side? The second puzzle, PUZZLE 2, is substantive. Each of the following theses has some prima facie plausibility: (a) there is introspective knowledge of thoughts, (b) knowledge requires evidence, and (c) there are no experiences of thoughts. It is unclear, though, that these claims form a consistent set. These puzzles are not unrelated. Dretske's theory of self-knowledge is a potential solution to PUZZLE 2 in that if Dretske's theory is correct, then (a), (b), and (c) are all true. We provide a solution to PUZZLE 1 by appeal to Dretske's early work in the philosophy of language on contrastive focus. We then distinguish between "Closure" and "Transmissibility", and raise and answer a worry to the effect that Dretske's theory of self-knowledge runs counter to Transmissibility. These results help to secure Dretske's theory as a viable solution to PUZZLE 2.
(2017). A condition for transitivity in high probability. European Journal for Philosophy of Science, 7, 435-444.
PDF: here.
ABSTRACT: There are many scientific and everyday cases where (a) each of Pr(H1 | E) and Pr(H2 | H1) is high and (b) it seems that Pr(H2 | E) is high. But high probability (or absolute confirmation) is not transitive and so it might be in such cases that (a) each of Pr(H1 | E) and Pr(H2 | H1) is high and (c) in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call “C1”, holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where it is not the case that H1 entails H2. I consider whether there are additional (non-trivial) conditions sufficient for transitivity in high probability. I consider three candidate conditions. I call them “C2”, “C3”, and “C2&3”. I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
(with Elliott Sober). (2017). Explanation = unification? A new criticism of Friedman's theory and a reply to an old one. Philosophy of Science.
PDF: here.
ABSTRACT: According to Friedman’s (1974) theory of explanation, a law X explains laws Y1, Y2, …, Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Kitcher (1976) criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.
(2016). Confirmation, increase in probability, and partial discrimination: A reply to Zalabardo. European Journal for Philosophy of Science, 6, 1-7.
PDF: here.
ABSTRACT: There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD (Probability-Difference), PR (Probability-Ratio), LD (Likelihood-Difference), and LR (Likelihood-Ratio). He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
(with Michael Roche). (2016). Review of Declan Smithies and Daniel Stoljar's (Eds.) Introspection and consciousness (2012, Oxford University Press). Philosophical Quarterly, 66, 203-208.
PDF: here.
(2015). Evidential support, transitivity, and screening-off. Review of Symbolic Logic, 8, 785-806.
PDF: here.
ABSTRACT: Is evidential support transitive? The answer is negative when evidential support is understood as confirmation so that X evidentially supports Y if and only if p(Y | X) > p(Y). I call evidential support so understood “support” (for short) and set out three alternative ways of understanding evidential support: support-t (support plus a sufficiently high probability), support-t* (support plus a substantial degree of support), and support-tt* (support plus both a sufficiently high probability and a substantial degree of support). I also set out two screening-off conditions (under which support is transitive): SOC1 and SOC2. It has already been shown that support-t is non-transitive in the general case (where it is not required that SOC1 holds and it is not required that SOC2 holds), in the special case where SOC1 holds, and in the special case where SOC2 holds. I introduce two rather weak adequacy conditions on support measures and argue that on any support measure meeting those conditions it follows that neither support-t* nor support-tt* is transitive in the general case, in the special case where SOC1 holds, or in the special case where SOC2 holds. I then relate some of the results to Douven’s evidential support theory of conditionals along with a few rival theories.
(2015). Review of Ted Poston's Reason and explanation: A defense of explanatory coherentism (2014, Palgrave Macmillan). Notre Dame Philosophical Reviews.
PDF: here.
(2014). A note on confirmation and Matthew properties. Logic & Philosophy of Science, XII, 91-101.
PDF: here.
ABSTRACT: There are numerous (Bayesian) confirmation measures in the literature. Festa provides a formal characterization of a certain class of such measures. He calls the members of this class “incremental measures”. Festa then introduces six rather interesting properties called “Matthew properties” and puts forward two theses, hereafter “T1” and “T2”, concerning which of the various extant incremental measures have which of the various Matthew properties. Festa’s discussion is potentially helpful with the problem of measure sensitivity. I argue that, while Festa’s discussion is illuminating on the whole and worthy of careful study, T1 and T2 are strictly speaking incorrect (though on the right track) and should be rejected in favor of two similar but distinct theses.
(with Elliott Sober). (2014). Explanatoriness and evidence: A reply to McCain and Poston. Thought, 3, 193-199.
PDF: here.
ABSTRACT: We argue elsewhere that explanatoriness is evidentially irrelevant (Roche and Sober 2013). Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT” (short for “Screening-Off Thesis”), is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
(with Michael Schippers). (2014). Coherence, probability and explanation. Erkenntnis, 79, 821-828.
PDF: here.
ABSTRACT: Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion--coherence--notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A’s degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel’s argument falls short.
(2014). On the truth-conduciveness of coherence. Erkenntnis, 79, 647-665.
PDF: here.
ABSTRACT: I argue that coherence is truth-conducive in that coherence implies an increase in the probability of truth. Central to my argument is a certain principle for transitivity in probabilistic support. I then address a question concerning the truth-conduciveness of coherence as it relates to (something else I argue for) the truth-conduciveness of consistency, and consider how the truth-conduciveness of coherence bears on coherentist theories of justification.
(with Tomoji Shogenji). (2014). Confirmation, transitivity, and Moore: The Screening-Off Approach. Philosophical Studies, 168, 797-817.
PDF: here.
ABSTRACT: It is well known that the probabilistic relation of confirmation is not transitive in that even if E confirms H1 and H1 confirms H2, E may not confirm H2. In this paper we distinguish four senses of confirmation and examine additional conditions under which confirmation in different senses becomes transitive. We conduct this examination both in the general case where H1 confirms H2 and in the special case where H1 also logically entails H2. Based on these analyses, we argue that the Screening-Off Condition is the most important condition for transitivity in confirmation because of its generality and ease of application. We illustrate our point with the example of Moore's "proof" of the existence of a material world, where H1 logically entails H2, the Screening-Off Condition holds, and confirmation in all four senses turns out to be transitive.
(with Tomoji Shogenji). (2014). Dwindling confirmation. Philosophy of Science, 81, 114-137.
PDF: here.
ABSTRACT: We show that as a chain of confirmation becomes longer, confirmation dwindles under screening-off. For example, if E confirms H1, H1 confirms H2, and H1 screens off E from H2, then the degree to which E confirms H2 is less than the degree to which E confirms H1. Although there are many measures of confirmation, our result holds on any measure that satisfies the Weak Law of Likelihood. We apply our result to testimony cases, relate it to the Data-Processing Inequality in information theory, and extend it in two respects so that it covers a broader range of cases.
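The dwindling effect can be exhibited in a small hypothetical model in which E and H2 are conditionally independent given H1 (so that H1 screens off E from H2); confirmation is measured here by the probability-difference measure, one measure satisfying the Weak Law of Likelihood. All numbers below are illustrative assumptions, not from the paper:

```python
from itertools import product

# Hypothetical model: E and H2 are conditionally independent given H1,
# so H1 screens off E from H2.
p_H1 = 0.5
p_E_given = {True: 0.8, False: 0.2}   # p(E | H1),  p(E | ~H1)
p_H2_given = {True: 0.9, False: 0.1}  # p(H2 | H1), p(H2 | ~H1)

# Joint distribution over worlds (H1, H2, E).
joint = {}
for h1, h2, e in product([True, False], repeat=3):
    p = p_H1 if h1 else 1 - p_H1
    p *= p_H2_given[h1] if h2 else 1 - p_H2_given[h1]
    p *= p_E_given[h1] if e else 1 - p_E_given[h1]
    joint[(h1, h2, e)] = p

def pr(event):
    return sum(p for w, p in joint.items() if event(w))

p_E = pr(lambda w: w[2])
# Degree to which E confirms H1 and H2 (probability-difference measure).
d_H1 = pr(lambda w: w[0] and w[2]) / p_E - pr(lambda w: w[0])
d_H2 = pr(lambda w: w[1] and w[2]) / p_E - pr(lambda w: w[1])

print(d_H2 < d_H1)  # True: confirmation dwindles along the chain
```

Here E confirms H1 to degree 0.3 but confirms H2 only to degree 0.24, even though H1 strongly confirms H2.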
(2014). Evidence of evidence is evidence under screening-off. Episteme, 11, 119-124.
PDF: here.
ABSTRACT: An important question in the current debate on the epistemic significance of peer disagreement is whether evidence of evidence is evidence. Fitelson argues (persuasively in my view) that, at least on some renderings of the thesis that evidence of evidence is evidence, there are cases where evidence of evidence is not evidence. I introduce a “screening-off” condition and show that under this condition evidence of evidence is evidence.
(with Elliott Sober). (2013). Explanatoriness is evidentially irrelevant, or inference to the best explanation meets Bayesian confirmation theory. Analysis, 73, 659-668.
PDF: here.
ABSTRACT: In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
(2013). Coherence and probability: A probabilistic account of coherence. In M. Araszkiewicz and J. Savelka (Eds.), Coherence: Insights from philosophy, jurisprudence and artificial intelligence (pp. 59-91). Dordrecht: Springer.
PDF: here.
ABSTRACT: I develop a probabilistic account of coherence, and argue that at least in certain respects it is preferable to (at least some of) the main extant probabilistic accounts of coherence: (i) Igor Douven and Wouter Meijs’s account, (ii) Branden Fitelson’s account, (iii) Erik Olsson’s account, and (iv) Tomoji Shogenji’s account. Further, I relate the account to an important, but little discussed, problem for standard varieties of coherentism, viz., the "Problem of Justified Inconsistent Beliefs."
(2012). Witness agreement and the truth-conduciveness of coherentist justification. Southern Journal of Philosophy, 50, 151-169.
PDF: here.
ABSTRACT: Some recent work in formal epistemology shows that "witness agreement" by itself implies neither an increase in the probability of truth nor a high probability of truth—the witnesses need to have some "individual credibility." It can seem that, from this formal epistemological result, it follows that coherentist justification (i.e., doxastic coherence) is not truth-conducive. I argue that this does not follow. Central to my argument is the thesis that, though coherentists deny that there can be noninferential justification, coherentists do not deny that there can be individual credibility.
(2012). A reply to Cling's "The epistemic regress problem." Philosophical Studies, 159, 263-276.
PDF: here.
ABSTRACT: Andrew Cling presents a new version of the epistemic regress problem, and argues that intuitionist foundationalism, social contextualism, holistic coherentism, and infinitism fail to solve it. Cling’s discussion is quite instructive, and deserving of careful consideration. But, I argue, Cling’s discussion is not in all respects decisive. I argue that Cling’s dilemma argument against holistic coherentism fails.
(2012). Transitivity and intransitivity in evidential support: Some further results. Review of Symbolic Logic, 5, 259-268.
PDF: here.
ABSTRACT: Igor Douven establishes several new intransitivity results concerning evidential support. I add to Douven’s very instructive discussion by establishing two further intransitivity results and a transitivity result.
(2012). A weaker condition for transitivity in probabilistic support. European Journal for Philosophy of Science, 2, 111-118.
PDF: here.
ABSTRACT: Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y | x) > Pr(y), y, in turn, probabilistically supports z, and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y | x) > Pr(y), Pr(z | y) > Pr(z), and the condition in question is satisfied, then Pr(z | x) > Pr(z). I argue for a second and weaker condition for transitivity in probabilistic support. This condition, or the principle involving it, makes it easier (than does the condition Shogenji provides) to establish claims of probabilistic support, and has the potential to play an important role in at least some areas of philosophy.
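The non-transitivity claimed at the start of the abstract is easy to exhibit concretely. A minimal sketch, using a hypothetical uniform eight-point probability space (the events are chosen only for illustration):

```python
from fractions import Fraction

# Hypothetical uniform space {1, ..., 8}; events are sets of points.
omega = set(range(1, 9))
x, y, z = {1, 2, 3}, {2, 3, 5, 6}, {5, 6, 7}

def pr(a):
    return Fraction(len(a), len(omega))

def pr_given(a, b):  # Pr(a | b), with Pr(b) > 0
    return Fraction(len(a & b), len(b))

print(pr_given(y, x) > pr(y))  # True:  x supports y (2/3 > 1/2)
print(pr_given(z, y) > pr(z))  # True:  y supports z (1/2 > 3/8)
print(pr_given(z, x) > pr(z))  # False: x does not support z (0 < 3/8)
```

Since x and z share no points, Pr(z | x) = 0 even though support runs from x to y and from y to z, so probabilistic support fails to be transitive here.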
(2011). Coherentism and inconsistency. Southwest Philosophy Review, 27, 185-193.
PDF: here.
ABSTRACT: If a subject’s belief system is inconsistent, does it follow that the subject’s beliefs (all of them) are unjustified? It seems not. But, coherentist theories of justification (at least some of them) imply otherwise, and so, it seems, are open to counterexample. This is the “Problem of Justified Inconsistent Beliefs”. I examine two main versions of the Problem of Justified Inconsistent Beliefs, and argue that coherentists can give at least a promising line of response to each of them.
(2011). Is coherentism inconsistent? Southwest Philosophical Studies, 33, 84-90.
PDF: here.
ABSTRACT: Can a perceptual experience justify (epistemically) a belief? More generally, can a nonbelief justify a belief? Coherentists answer in the negative: Only a belief can justify a belief. A perceptual experience can cause a belief but cannot justify a belief. Coherentists eschew all noninferential justification—justification independent of evidential support from beliefs—and, with it, the idea that justification has a foundation. Instead, justification is holistic in structure. Beliefs are justified together, not in isolation, as members of a coherent belief system. The main question I consider is: Is coherentism consistent? I set out an apparent inconsistency in coherentism. I then give a resolution to this apparent inconsistency.
(2010). Coherentism, truth, and witness agreement. Acta Analytica, 25, 243-257.
PDF: here.
ABSTRACT: Coherentists on epistemic justification claim that all justification is inferential, and that beliefs, when justified, get their justification together (not in isolation) as members of a coherent belief system. Some recent work in formal epistemology shows that “individual credibility” is needed for “witness agreement” to increase the probability of truth and generate a high probability of truth. It can seem that, from this result in formal epistemology, it follows that coherentist justification is not truth-conducive, that it is not the case that, under the requisite conditions, coherentist justification increases the probability of truth and generates a high probability of truth. I argue that this does not follow.
(2006). Can a coherentist be an externalist? Croatian Journal of Philosophy, 6, 269-280.
PDF: here.
ABSTRACT: It is standard practice, when distinguishing between the foundationalist and the coherentist, to construe the coherentist as an internalist. The coherentist, the construal goes, says that justification is solely a matter of coherence, and that coherence, in turn, is solely a matter of internal relations between beliefs. The coherentist, so construed, is an internalist (in the sense I have in mind) in that the coherentist, so construed, says that whether a belief is justified hinges solely on what the subject is like mentally. I argue that this practice is fundamentally misguided, by arguing that the foundationalism/coherentism debate and the internalism/externalism debate are about two very different things, so that there is nothing, qua coherentist, precluding the coherentist from siding with the externalist. I then argue that this spells trouble for two of the three most pressing and widely known objections to coherentism: the Alternative-Systems Objection and the Isolation Objection.