Below you'll find our course schedule along with links to the readings for each class session.
The week's mandatory readings are to be completed before that week's session. You're not expected to complete the optional readings, but you can take a look at them if you're interested – they'll provide additional context for the week's topic of discussion. You may also find them useful if you decide to write your paper on that topic.
Some of the readings are from Michael G. Titelbaum's forthcoming textbook Fundamentals of Bayesian Epistemology, a draft of which I've sent to you. (If you haven't received it, please let me know.) The rest are linked below. To access any of the linked readings, use the course password I've sent to you. (Where possible, I've also provided links to versions that aren't password-protected; most of these will need to be accessed from a campus address.)
If you're participating in the course remotely, you can join each session via the Webex link in the menu panel of our course's Blackboard page.
Note: our schedule is subject to revision as the semester progresses, but I'll keep this version current. I'll also post class notes after each week's session.
Optional reading:
Fundamentals of Bayesian Epistemology (FBE), Chapter 1: "Beliefs and degrees of belief"
Lecture notes: Week 1
Mandatory reading:
David Christensen, "Higher-order evidence", 2010 (also available here)
Optional reading:
Andy Egan & Adam Elga, "I can't believe I'm stupid", 2005 (also available here)
Questions for reflection
After presenting some cases that involve higher-order evidence (HOE), Christensen introduces, in Section 2, the question of whether we should really take this sort of evidence to be relevant to our beliefs about the world. In discussing this question, he points out some ways in which HOE is different from ordinary evidence. One of these ways is that HOE is, in a particular sense, agent-relative. What does this come to, exactly? (The other way, Christensen says, is that HOE somehow forces agents to "embody a kind of epistemic imperfection". We'll return to this below.)
On p. 191 Christensen introduces a hypothetical "cognitively perfect" agent: an agent who has a perfect grasp of all evidential support relations and so always responds maximally rationally to her evidence. For what purpose, exactly, does he introduce this character? What is the point he's making here?
In Section 3 Christensen explores how defeat by HOE is different from standard cases of undercutting defeat. His diagnosis involves what he calls "bracketing". What does he mean by this? What's the diagnosis, exactly, and how does it relate to Christensen's claim that HOE forces agents to embody epistemic imperfection?
In Section 5 Christensen considers an alternative way of representing the significance of HOE, one on which no bracketing seems to be involved. He claims that this picture isn't really inconsistent with the bracketing picture -- they're just two different ways of describing the same phenomenon. He then argues that the alternative picture is worse, in the sense that it obscures some of what's distinctive about HOE. In particular, it obscures the kind of epistemic imperfection Christensen takes HOE-affected beliefs to embody. Why does he think the alternative picture obscures this? What is going on here, exactly?
Discussion chair: Brett Topey
Lecture notes: Week 2
Mandatory reading:
FBE, Chapter 2: "Probability distributions"
Lecture notes: Week 3
Solutions: How to prove Equivalence and Decomposition
Problem set 1 due at start of class
Mandatory reading:
FBE, Chapter 3: "Conditional credences"
Note: This chapter presents and explains some important implications of the Ratio Formula. You should work carefully through all the material up to the start of Section 3.3 (on p. 77). Section 3.3 itself is worth reading, but we won't focus on it.
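For reference, here is the Ratio Formula in its standard form (the notation may differ slightly from the book's):

```latex
% The Ratio Formula (standard statement), defined whenever cr(Q) > 0:
\mathrm{cr}(P \mid Q) \;=\; \frac{\mathrm{cr}(P \,\&\, Q)}{\mathrm{cr}(Q)}
```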
Lecture notes: Week 4
Mandatory reading:
Maria Lasonen-Aarnio, "Higher-order evidence and the limits of defeat", 2014 (also available here)
Optional reading:
Maria Lasonen-Aarnio, "Enkrasia or evidentialism? Learning to love mismatch", 2020 (also available here)
Questions for reflection
Lasonen-Aarnio presents a puzzle for the principle she calls Higher-order defeat -- i.e., the thesis that good evidence to the effect that one's belief state was produced by a (rationally) flawed process has the power to defeat that belief state, to render it irrational. The puzzle, she says, arises from Higher-order defeat together with the following premise:
Whatever property a belief state must have in order for it to count as rational -- call it the "justification-conferring property" -- it is possible for an agent's belief state to in fact have that property while, at the same time, the agent has good evidence that her belief state lacks the justification-conferring property.
Endorsing both of these theses (Higher-order defeat and the above premise), according to Lasonen-Aarnio, commits one to adopting a "two-tiered theory of justification". So:
What is a two-tiered theory of justification, exactly?
Why does endorsing these two theses commit one to such a theory?
What, exactly, is the puzzle that is supposed to arise here? The basic idea is that, according to a two-tiered theory, an agent can find herself in rational dilemmas: situations in which there's no belief state she's rationally permitted to be in. Why is this the case?
Given this puzzle, there are only three options: reject Higher-order defeat, reject the above premise, or accept the conclusion that higher-order evidence gives rise to rational dilemmas. Lasonen-Aarnio endorses the first option and argues against the other two. So:
What, exactly, is her worry about the view that higher-order evidence gives rise to rational dilemmas? In Section 5 she says that proponents of Higher-order defeat, even if they accept that there are rational dilemmas here, will still want to say that there's something that the agent who receives higher-order evidence ought to do, and she suggests that this is going to lead them to an unstable position. Why?
In Section 4 she considers adopting the "Über-rule view" as a way of rejecting the above premise and so avoiding a two-tiered theory of justification. How does this view work, and what does she take to be wrong with it?
How, exactly, is the Über-rule view supposed to be related to the "hierarchy view" she discusses in Section 6?
Discussion chair: Simon Fischer (simon[dot]fischer2[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 5
Mandatory reading:
Michael G. Titelbaum, "Rationality's fixed point (or: In defense of Right Reason)", 2015 (also available here)
Optional reading:
Clayton Littlejohn, "Stop making sense? On a puzzle about rationality", 2018 (also available here)
Questions for reflection
Titelbaum uses what he calls the Akratic Principle -- roughly, the thesis that one is never rationally permitted to be in a state that contains a particular attitude and also contains the belief that that attitude is rationally forbidden in one's current situation -- to argue for the Fixed Point Thesis -- roughly, the thesis that one is never rationally permitted to have a false belief about rational requirements (i.e., a false belief about what doxastic states are permitted in what situations). The structure of the argument is as follows.
First, he gives two arguments from the Akratic Principle to what he calls the Special Case Thesis -- roughly, the thesis that one is never rationally permitted, in a situation in which a particular attitude is in fact rationally required, to believe that that attitude is rationally forbidden. Then he argues that there just is no principled picture on which the Special Case Thesis is true but the more general Fixed Point Thesis is not.
In exactly what ways is the Fixed Point Thesis a generalization of the Special Case Thesis? (I mentioned this in the lecture.)
The first of Titelbaum's two arguments for the Special Case Thesis, in Section 3, is a simple logical argument that shows that the Akratic Principle does indeed entail the Special Case Thesis. So why does he give a second argument in Section 4? What is its purpose in his discussion?
Thinking about the following may help with the above question. In his second argument for the Special Case Thesis, he discusses some simple rules that, he says, can't be genuine rules of rationality because, if they were, they would be self-undermining.
What does he mean by this? How does his "Self-Undermining Argument" work?
He introduces "Restricted" versions of those rules and argues that they can't be genuine rules of rationality either -- they don't fully solve the self-undermining problem. Why don't they? What is the problem that still remains?
Bonus question: suppose there is only one genuine rule of rationality, the Über-rule. Can there be a "Restricted" version of such a rule? If so, does there remain any self-undermining problem here?
He then claims that there are two versions of the rules that can fully solve the self-undermining problem: "Properly Restricted" versions and "Current-Situation" versions. What is the difference here, and why does he think only the "Properly Restricted" versions are acceptable?
Finally, how are the "Properly Restricted" versions supposed to get us from the Special Case Thesis to the full Fixed Point Thesis?
In Section 5 Titelbaum distinguishes three positions about the significance of higher-order evidence.
On the top-down position, any combination of attitudes can turn out to be rationally permitted. Why? Because, for any combination of attitudes, an agent can have higher-order evidence that makes it rationally permitted for her to believe that that combination of attitudes is rationally permitted, and there's a trickle-down effect: an agent's permission to believe that a combination of attitudes is permitted is thereby also going to be permission to have that combination of attitudes.
On the bottom-up position, the rational requirements are what they are; some combinations of attitudes are permitted and some are not. And if a combination of attitudes is not permitted, an agent can never have higher-order evidence that makes it rationally permitted for her to believe that it is permitted. As Titelbaum puts it: "What's forbidden is forbidden, an agent's beliefs about what's rational are required to get that correct, and no amount of testimony, training, or putative evidence about what's rational can change what is rationally permitted or what the agent is rationally permitted to believe about it" (p. 279).
The mismatch view combines elements of these two. For any combination of attitudes, an agent can have higher-order evidence that makes it rationally permitted for her to believe that that combination of attitudes is rationally permitted, but there's no trickle-down effect: what's forbidden is forbidden. So, if a combination of attitudes is not rationally permitted, an agent can (given the right sort of higher-order evidence) be permitted to believe that that combination of attitudes is permitted, but the combination of attitudes will in fact nevertheless remain forbidden.
Some questions about these positions:
How do the responses we discussed to Lasonen-Aarnio's puzzle fit into this taxonomy? (I asked about this in the lecture.)
It's clear enough that the mismatch view is inconsistent with the Fixed Point Thesis. (Make sure you understand why.) As for the top-down position, Titelbaum says that it is not in direct conflict with the Fixed Point Thesis, but he says that, in the end, it does turn out to be inconsistent with that thesis. What does he mean here? What is his argument, exactly?
Discussion chairs: Julian Kessler (julian[dot]kessler2[at]stud[dot]sbg[dot]ac[dot]at), Benedikt Leitgeb (benedikt[dot]leitgeb[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 6
Mandatory reading:
Robert Steel, "Against Right Reason", 2019 (also available here)
Optional reading:
Miriam Schoenfield, "Bridging rationality and accuracy", 2015 (also available here)
Questions for reflection
In this paper Steel develops and defends an argument for the rejection of views like Titelbaum's (and perhaps views like Lasonen-Aarnio's as well) -- i.e., an argument for the claim that what one ought to do in response to higher-order evidence is to adjust one's (first-order) beliefs. It's an argument based on a familiar observation, one we've discussed already: that, in certain high-stakes situations, it seems like it's a really bad idea to make decisions based on your beliefs if you have higher-order evidence suggesting that those beliefs are rationally flawed. The argument, which Steel calls the simple argument, goes something like this.
Premise 1. There exist situations in which the following facts hold: an agent believes p and has in fact reasoned well, so that, if no higher-order evidence were present, the agent's belief would certainly count as rational; the agent has some higher-order evidence that, at least intuitively, makes it seem extremely likely (from her perspective) that she has not reasoned well; and the agent is in a position in which she needs to make an extremely high-stakes decision about what action to take, where what action will result in the best outcome depends entirely on whether p is true.
Premise 2. In such a situation, the agent should not rely on her conclusion that p as she makes her decision about what action to take.
Premise 3. Insofar as Premise 2 is true, the best explanation of the fact that the agent shouldn't rely on her conclusion that p in her decision-making is that, given her higher-order evidence, she shouldn't believe that conclusion.
Conclusion. Agents should at least sometimes adjust their first-order beliefs in the face of higher-order evidence.
Premise 1 is clearly true; we've talked about such situations in class already. Steel seems to think it's pretty clear that premise 2 is true as well. (What do you think?) If that's right, then premise 3 is where all the action is. And in fact, some defenders of views like Titelbaum's do seem to be inclined to deny premise 3: the way they try to handle situations like those described in premise 1 is by providing some story on which the agent should believe that p but, for some reason, shouldn't act on that belief. The bulk of Steel's paper is devoted to trying to show that none of these stories can be successful.
In general, how such a story proceeds is as follows. First, it is argued that, in a situation like those described in premise 1, the agent's belief (her belief that p, that is) has some particular property as a result of her higher-order evidence, a property the belief wouldn't have if the higher-order evidence were not present. Then, it is argued that the fact that the belief has this property can explain why, given features of the situation (including the high stakes of the decision), the agent should not act on that belief. (How does Weatherson's story, for example, fit into this structure?)
Steel says that any such story, in order to be successful, needs to meet at least the following constraints:
Whatever the property is that's under discussion, it must be clear that this property is not what tracks whether an agent should have the belief. (You can't just say, for instance, that the reason the agent shouldn't act on the belief is that the belief is unreasonable, unless you explain what "unreasonable" means in such a way that it's clear that it can be the case that the agent should have the belief despite the fact that doing so is unreasonable.)
The story must be able to cover every situation like those described in premise 1.
Steel then argues that none of the stories offered in the literature meets both of these constraints. So:
Which of these constraints does Weatherson's proposed story violate, according to Steel? Why? What is Steel's argument here? In particular, what is the role that Steel's TAUT OR NOT case plays in the argument? Why is this case supposed to present a problem for pictures like Weatherson's?
What about the story according to which the agent's belief, though it's correct in this case, is the manifestation of a "bad habit", a "disposition that will get [her] into trouble elsewhere" (p. 446)? How is this story supposed to work, and what does Steel think is wrong with it? (Note that Steel canvasses two possible ways for this story to go and argues against both of them.) At one point he says that the story (on one of the ways it might go, at least) "makes bad predictions" (p. 448) -- what does he mean by this, exactly?
A final note: the paper's main argument, which is what I have been talking about here, concludes at the end of Section 6. In Section 7 Steel sketches his own positive view about epistemic norms, a view that is in line with his conclusion that one should adjust one's beliefs in the face of higher-order evidence. And in Section 8 he deals with an objection of a different sort from the ones discussed above. Do your best with Section 7 -- we'll be discussing these issues in some detail, but not until January. As for Section 8, you don't need to worry too much about the issues under discussion there -- we won't be dealing with them in this course.
Discussion chairs: Cornelia Mayer (cornelia[dot]mayer3[at]stud[dot]sbg[dot]ac[dot]at), Kilian Wehner (kilian[dot]wehner[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 7
Problem set 2 due at start of class
Mandatory reading:
Sophie Horowitz, "Epistemic akrasia", 2014 (also available here)
Optional reading:
Mattias Skipper, "Higher-order defeat and the impossibility of self-misleading evidence", 2019 (also available here)
Questions for reflection
Horowitz is investigating level-splitting views: views in the neighborhood of Lasonen-Aarnio's. These are views on which, in the usual kinds of cases of (misleading) higher-order evidence, epistemic akrasia is rational: it's rational for me to be in a state that contains both the belief that P and the belief that my evidence doesn't support P. Or, in other words, it's rational for me, in such cases, to believe that P (since P is in fact supported by my total evidence) and also, at the same time, to believe that the rational response to my evidence is not to believe that P (since this is what my higher-order evidence suggests). Much of the paper is devoted to pointing out odd or objectionable consequences of views like this.
One initial problem, says Horowitz, is that, if it's rational for me to be in an akratic state like the one described above, then I can, by some simple reasoning about my own beliefs, come to believe (rationally) that my evidence is misleading. So:
Why is this so, exactly? What is the reasoning I can use here?
Why is it supposed to be strange that, in a situation like this, I can rationally conclude that my evidence is misleading?
In particular, how is this sort of situation different from the lottery situation she discusses on p. 727, in which she takes it not to be strange that an agent can rationally conclude that her evidence is misleading?
A second initial problem is supposed to arise from thinking about how I'll behave if I'm in an akratic state like the one described above. Presumably, insofar as it's rational for me to believe that P, it's also rational for me to act on that belief. But, according to Horowitz, if I'm asked to "justify or explain [my] behavior", I'll be "at a loss". Why does she think so?
The former worry depends on a picture on which there's a straightforward connection between belief and reasoning, and the latter worry depends, analogously, on a picture on which there's a straightforward connection between belief and action. In Section 5, then, Horowitz responds to the objection that, in cases in which higher-order evidence is present, these straightforward connections are not present. So:
What does she say here?
In particular, is there a connection between what she says here and what Steel says in his paper? If so, what is it?
Why does she think that, even if we grant that these straightforward connections are not present, the above worries still show that there's something wrong with level-splitting views?
In Section 4 Horowitz develops another worry: she argues that there's no good way of incorporating the level-splitting verdict into a plausible general account of when higher-order evidence should have an effect on one's first-order beliefs. There are two different worries here (one discussed in 4.1 and 4.2 and a second discussed in 4.3), both of which are important to understand. But I want to ask in particular about her argument against a thesis she calls Proxy, which says that higher-order evidence should have an effect on first-order beliefs "only insofar as it serves as a proxy for first-order evidence" (p. 729). She says that, if Proxy is correct, then higher-order evidence should have an effect on first-order beliefs "only insofar as one does not also possess the relevant first-order evidence" (p. 730). So:
What is her argument for the claim that Proxy is not correct?
What about a view according to which higher-order evidence should have an effect on first-order beliefs only insofar as one has not already evaluated the relevant first-order evidence? Horowitz doesn't mention such a view. Is such a view compatible with Proxy? Is it plausible? Can Horowitz's argument be used to show that it's incorrect?
Finally, Horowitz discusses cases -- in particular, the Dartboard case -- in which, she says, epistemic akrasia is rational. But she says that the reason epistemic akrasia is rational in these cases is that they have a particular feature, one that is not shared by the usual cases of higher-order evidence. What is that feature, and why does she think it's the important feature here?
Discussion chairs: Akram Shatalebi (akram[dot]shatalebi[at]stud[dot]sbg[dot]ac[dot]at), Johannes Hüffer (johannes[dot]hueffer[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 8
Mandatory reading:
David Christensen, "Disagreement, drugs, etc.: From accuracy to akrasia", 2016 (also available here)
Optional reading:
Sophie Horowitz, "Predictably misleading evidence", 2019 (also available here)
Questions for reflection
It's important, if you want to understand what Christensen is up to in this paper, to understand how the Simple Thermometer Model (STM) works and why it's supposed to be intuitive. So make sure you understand what he means when he says that, on this model, if you have some initial credence in P and then gain some higher-order evidence suggesting that the reasoning by which you formed that initial credence was flawed, your new credence in P should match your independent hypothetical credence in P -- i.e., the credence you have in P independent of the first-order reasoning by which you formed your initial credence in P and conditional on the fact that an agent like you formed that initial credence. (I tried to explain this a bit in our most recent lecture -- see the lecture notes.)
Why does setting your credence this way amount to treating yourself like a thermometer? Make sure you understand the analogy here.
Consider a case of disagreement. The STM, Christensen suggests, delivers the verdict that, if you antecedently take yourself and the person you're disagreeing with to be equally good reasoners who have the same evidence, you should give your own (initial) opinion and the other person's opinion equal weight. How, exactly, does the STM deliver this verdict? (The thermometer analogy is worth remembering here. The idea is to treat yourself and the other person as two thermometers that you antecedently think are equally likely to get things right. Think about why the STM allows you to think of yourself and the other person in this way.)
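If it helps, here's a toy version of the thermometer calculation, with made-up numbers (they're mine, not Christensen's):

```python
# A toy version of the thermometer calculation, with made-up numbers (mine, not Christensen's).
# You and a peer both evaluate the same evidence bearing on P; you read "high", she reads "low",
# and you antecedently treat the two of you as equally reliable "thermometers" for P.

MY_CREDENCE   = 0.9   # your initial credence in P (the "high" reading)
PEER_CREDENCE = 0.1   # your peer's initial credence in P (the "low" reading)
RELIABILITY   = 0.8   # chance each thermometer reads "high" if P is true, "low" if P is false
PRIOR_P       = 0.5   # your credence in P, bracketing your own first-order reasoning

# Probability of the observed pair of readings (you "high", peer "low"):
p_readings_given_P     = RELIABILITY * (1 - RELIABILITY)   # P true: your reading is right, hers isn't
p_readings_given_not_P = (1 - RELIABILITY) * RELIABILITY   # P false: her reading is right, yours isn't

# Condition on both readings, just as you would with two instruments:
posterior_P = (PRIOR_P * p_readings_given_P) / (
    PRIOR_P * p_readings_given_P + (1 - PRIOR_P) * p_readings_given_not_P)

print(posterior_P)                          # 0.5 -- the two equally reliable readings cancel out
print((MY_CREDENCE + PEER_CREDENCE) / 2)    # 0.5 -- i.e., the "split the difference" equal-weight verdict
```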
There's a particular worry about the STM that motivates Christensen to examine an alternative picture, which he calls the Idealized Thermometer Model (ITM). The worry is that, in certain cases -- for instance, cases in which the agent gains higher-order evidence before she ever forms an initial credence in P -- the agent doesn't have an independent hypothetical credence at all, which means there's just nothing for her new credence to match. (Make sure you understand exactly how this worry is supposed to work. Again, I tried to explain it a bit in our most recent lecture.)
The ITM gets around this problem by saying that your new credence in P should match, not your actual independent hypothetical credence in P (since, after all, you might not have one), but (roughly) the independent hypothetical credence in P that you would have if you were fully rational. So what's relevant is not your actual initial credence -- it's the initial credence you should have formed. And what's relevant is not how likely you actually take P to be, independent of your first-order reasoning and conditional on an agent like you having formed that initial credence -- it's how likely you should take P to be, independent of your first-order reasoning and conditional on an agent like you having formed that initial credence. (Make sure you understand exactly why this account avoids the worry mentioned above.)
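As a rough schematic (the notation is mine, not Christensen's), the two models can be put like this:

```latex
% Rough schematic only; the notation is mine, not Christensen's.
% c_0 is your actual initial credence in P; "ind" marks credences that bracket
% the first-order reasoning by which you formed c_0.

% STM: match the independent hypothetical credence you actually have:
\text{STM:}\quad cr_{\mathrm{new}}(P) = cr_{\mathrm{ind}}\big(P \mid \text{an agent like me formed initial credence } c_0 \text{ in } P\big)

% ITM: match the independent hypothetical credence you *should* have, if fully rational:
\text{ITM:}\quad cr_{\mathrm{new}}(P) = cr_{\mathrm{ind}}^{\,\mathrm{rational}}\big(P \mid \text{an agent like me formed initial credence } c_0 \text{ in } P\big)
```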
Can you think of some middle ground, some way of avoiding the worry without moving all the way to the ITM? (If you can, think about why Christensen might prefer the ITM to the less extreme account you've thought of.)
Consider a case of disagreement again. Notice that the ITM tells you to do something very different in the case in which you are the one who in fact evaluated the (first-order) evidence correctly than it does in the case in which the other person is the one who in fact evaluated the evidence correctly. Suppose you've initially formed a low credence in P, and the other person has initially formed a high credence in P. In the case where you're right, the ITM says that your new credence in P should match the hypothetical credence you should have in P, independent of your first-order reasoning and conditional on an agent like you having formed a low initial credence in P. But in the case where you're wrong, the ITM says that your new credence in P should match the hypothetical credence you should have in P, independent of your first-order reasoning and conditional on an agent like you having formed a high initial credence in P. Is this a problem? Why or why not? (Note that this is relevant to the worries for the ITM that Christensen discusses in sections 5a and 5b.)
Finally, Christensen says that, if the ITM is correct, then agents who have higher-order evidence should often be epistemically akratic. The reason, very roughly, is that if an agent who has higher-order evidence adjusts her credence in P in accordance with the ITM, she'll end up suspecting that she has failed to obey the ITM. So, insofar as she takes obeying the ITM to be a rational requirement, she'll take her credence in P to be irrational.
How does the argument here (the one Christensen presents in section 5e) go? Make sure you understand how the case he adapts from Schoenfield, on which the agent gets evidence that she's anti-reliable, is supposed to work.
Why, exactly, does Christensen think it's not really a problem that the ITM has this implication? Is he right about this?
Discussion chairs: Michael Huemer (michael[dot]huemer2[at]stud[dot]sbg[dot]ac[dot]at), Anna Skácelová (anna[dot]skacelova[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 9
Mandatory reading:
Adam Elga, "The puzzle of the unmarked clock and the New Rational Reflection principle", 2013 (also available here)
Optional readings:
Maria Lasonen-Aarnio, "New Rational Reflection and internalism about rationality", 2015 (also available here)
Richard Pettigrew & Michael G. Titelbaum, "Deference done right", 2014 (also available here)
Questions for reflection
The first thing to note here is that the puzzle of the unmarked clock is in all important respects just like Horowitz's dartboard case -- it's a case where (i) it's rational for you to be less than certain of what your evidence is, (ii) because of this uncertainty, it's rational for you to be unsure what's rational for you to believe, and (iii) as a result, it seems like it's rational for you to be epistemically akratic.
Why (i)?
The thought is that when you look at the clock, you can't be sure exactly where the minute hand is pointing. (Or even where it seems to you to be pointing. If you're worried about the distinction here, you can restructure the case so that it's all about how the clock seems to you to read instead of how it actually reads, and the puzzle will still arise.)
As Elga puts it: "If your eyes are like mine, it won't be clear whether the clock reads 12:17 or some other nearby time" (p. 128).
Why (ii)?
Elga stipulates that, whatever the clock reads, it's rational, given your evidence, for you to be 99% confident that the clock reads some time within one minute of that. So, if it actually reads 12:17, it's rational for you to be 99% confident that it reads either 12:16 or 12:17 or 12:18.
He also stipulates, for simplicity's sake, that it's rational for you to divide that 99% equally among the three possibilities, which means it's rational for you to be 33% confident the clock reads 12:16, 33% confident it reads 12:17, and 33% confident it reads 12:18. This doesn't affect the basic structure of the case, but it does make our calculations simpler.
Now: suppose you know all of these facts about how rationality works in this situation. Then you know that what's rational for you to believe depends on what the clock in fact reads. So, insofar as you can't be sure exactly what the clock reads, you also can't be certain what's rational for you to believe.
Why (iii)?
Suppose the clock really does read 12:17. Then it's rational for you to be 99% confident that the clock reads either 12:16 or 12:17 or 12:18. Suppose that, rationally, you are 99% confident of this. What should you yourself think about whether this level of confidence is rational?
Well, you should think that it's rational only if the clock really reads 12:17. If the clock really reads 12:18, for instance, then you should be 33% confident that it reads 12:19, and if the clock really reads 12:16, then you should be 33% confident that it reads 12:15. More generally: in any case in which the clock does not actually read 12:17, you should be much less than 99% confident that the clock reads either 12:16 or 12:17 or 12:18.
So, since you're only 33% confident that the clock really does read 12:17, you should be only 33% confident that it's rational for you to be 99% confident that the clock reads either 12:16 or 12:17 or 12:18, and you should be 67% confident that it's rational for you to be much less than 99% confident that the clock reads either 12:16 or 12:17 or 12:18.
So you should think your own degree of confidence that the clock reads either 12:16 or 12:17 or 12:18 is definitely not irrationally low and is probably irrationally high. This at least looks like an instance of epistemic akrasia.
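If you want to see the arithmetic laid out, here's a small sketch using the stipulations above (for simplicity I ignore the leftover 1%, so "99% confident" becomes full confidence within the three-minute window):

```python
# A small sketch of the unmarked-clock arithmetic, using the stipulations above.
from fractions import Fraction

def rational_credences(true_reading):
    """Rational credences over readings, given what the clock actually reads (minutes past 12)."""
    return {true_reading - 1: Fraction(1, 3),
            true_reading:     Fraction(1, 3),
            true_reading + 1: Fraction(1, 3)}

target = {16, 17, 18}                  # the disjunction "the clock reads 12:16 or 12:17 or 12:18"
my_credences = rational_credences(17)  # the clock in fact reads 12:17, so these are also your credences

for reading, my_cred in sorted(my_credences.items()):
    rational_in_target = sum(c for r, c in rational_credences(reading).items() if r in target)
    print(f"If it reads 12:{reading} (my credence {float(my_cred):.2f}): "
          f"rational credence in the disjunction = {float(rational_in_target):.2f}")

# Only the 12:17 hypothesis makes (nearly) full confidence in the disjunction rational;
# on the other two hypotheses, the rational credence in it is only about 0.67.
# So you're about 1/3 confident your 99% is rational and about 2/3 confident it's too high.
```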
Do you find this case convincing? Why or why not?
The second thing to note here is that the kind of akrasia that seems to be rational in this case is, in at least one important respect, very similar to the kind of akrasia that is, according to Christensen's Idealized Thermometer Model, rational in certain cases of higher-order evidence.
It is, of course, not similar in every important respect. In the unmarked clock case, for instance, the source of the oddness is that it's a case in which it's rational to be uncertain about what your evidence is. But cases of higher-order evidence aren't generally cases in which you're uncertain about what your evidence is.
But it is similar in the following respect: in both the unmarked clock case and the relevant cases of higher-order evidence, you should think in advance that your evidence will lead you away from the truth. At least, Christensen thinks so. We didn't talk about this last week, so I'm going to ask about it now.
Why is it, exactly, that Christensen takes higher-order evidence to be systematically misleading?
In what sense is your evidence in the unmarked clock case predictably misleading? (It may be useful here to look back at Horowitz's discussion of the dartboard case.)
What is the connection supposed to be between predictably misleading evidence and the rationality of epistemic akrasia?
The third thing to note here is that the principles Elga calls Rational Reflection and New Rational Reflection are, in a sense, anti-akrasia principles. Remember: what an anti-akrasia principle tells you is, more or less, that there should be a match between what you believe about whether P and what you believe about what it would be rational to believe about whether P. The principles Elga is discussing are attempts to generalize this idea to a degree-of-belief (or credence) framework.
The basic thought is that you might be less than certain about what credence distribution is (ideally) rational. But, for any credence distribution you think might be rational, you can suppose that it's rational.
What Rational Reflection says, then, is that your degree of belief in any proposition, on supposing that cr* is the rational credence distribution, is whatever credence cr* assigns to that proposition.
Rational Reflection and New Rational Reflection are principles that tell you (in a credence framework) exactly how and in what sense your first-order attitudes should match your attitudes about what first-order attitudes are rational.
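In symbols (roughly following Elga's formulations), with cr your credence function and cr* any candidate ideal credence function, the two principles look like this; I've included New Rational Reflection here for reference:

```latex
% Rational Reflection (roughly as Elga states it):
cr\big(A \mid cr^{*} \text{ is the ideal credence function}\big) = cr^{*}(A)

% New Rational Reflection: defer to cr* as it would be after learning that it is ideal:
cr\big(A \mid cr^{*} \text{ is the ideal credence function}\big)
  = cr^{*}\big(A \mid cr^{*} \text{ is the ideal credence function}\big)
```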
Now: Rational Reflection is inconsistent with our above conclusion about the unmarked clock case (i.e., that you can rationally be in a situation where you think your own credence in a proposition is definitely not irrationally low and is probably irrationally high). Why? Roughly, because Rational Reflection is inconsistent with the claim that you can ever be rational while being less than certain about what credences you should have.
Why is this latter claim true? Elga gives a proof -- make sure you understand it.
Do you think this consequence of Rational Reflection is plausible? Why or why not?
Elga takes there to be something defective about Rational Reflection. What is the problem supposed to be? And why does moving to New Rational Reflection fix it? Why is it, exactly, that New Rational Reflection is not inconsistent with our above conclusion about the unmarked clock case? And finally: how, if at all, does this relate to the idea that the reason epistemic akrasia is rational in the unmarked clock case (or Horowitz's dartboard case) is that your evidence is predictably misleading?
Discussion chairs: Alena Egorenko (alena[dot]egorenko[at]stud[dot]sbg[dot]ac[dot]at), Alba Ramírez Guijarro (s1077847[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 10
Mandatory reading:
Ru Ye, "Higher-order defeat and intellectual responsibility", forthcoming (also available here)
Optional reading:
Declan Smithies, "Ideal rationality and logical omniscience", 2015 (also available here)
David Christensen, "Akratic (epistemic) modesty", forthcoming (also available here)
Questions for reflection
Ye's paper has two main parts. In the first part, she introduces a question about how to explain the phenomenon of higher-order defeat and argues that the usual story (the one endorsed by, e.g., Christensen) answers this question incorrectly. Then, in the second part, she motivates a particular view about how justification works by arguing that this view allows for a satisfying answer to the question she introduced in the first part.
But before we get into that, something to note about the way Ye sets up her discussion. We've generally been thinking of higher-order evidence as evidence that calls into question our beliefs' rationality, but Ye suggests that we should instead think of it as evidence that calls into question our beliefs' reliability. So:
Why does she think this is so? Do you find her reasons convincing?
And something to think about as you work through the rest of the paper: Does the distinction here make a significant difference to Ye's central argument? If so, in what way does it make a difference?
Now on to Ye's main question: when an agent's justification for believing P is defeated by higher-order evidence, does the higher-order evidence defeat that agent's propositional justification, or does it leave the propositional justification in place and instead defeat only the agent's doxastic justification?
In order to understand that question, we need to understand the difference between propositional and doxastic justification. The general idea is this. To have propositional justification to believe P is just to have good reason to believe P. (Note: this is compatible with different views about what counts as good reason to have a belief. If, for instance, you think that what matters is what evidence you have -- which is more or less the supposition we've been working under -- then you might say that what it takes for you to have good reason to have a belief is just for that belief to be supported by your total evidence.) To have doxastic justification, on the other hand, is, first, to have propositional justification (i.e., good reason) to believe P, and second, to actually believe P in a way that is properly based on your propositional justification (i.e., to actually believe P on the basis of your good reason). When we say that your belief is justified, what we're talking about is doxastic justification. Propositional justification is an ingredient of doxastic justification.
Note: this means you can have a belief that is unjustified in one of two ways. You might believe that P without having good reason to believe that P. That is, you might not have propositional justification to believe that P. Or you might believe P and have good reason for doing so, but your belief might not be properly based on that good reason. In that case, your belief isn't justified despite the fact that you have propositional justification to believe P. So: What does it mean, exactly, to say that a belief is properly based on good reason? And why is this required for your belief to be justified?
Given all of this, Ye's question becomes: when an agent's justification for believing P is defeated by higher-order evidence, does the higher-order evidence rob the agent of good reason to believe P, or does it leave that good reason in place and instead just make it impossible for the agent's belief to be properly based on that good reason?
Ye suggests that the usual story of how higher-order defeat works is committed to the second possibility: higher-order evidence leaves propositional justification in place but makes it impossible for the agent to properly base her belief on that propositional justification. Why, exactly, does she think the usual story is committed to this?
Ye argues that, if propositional justification remains in place, there just is no plausible story to tell about how it is that higher-order evidence makes it impossible for an agent's belief to be properly based on that propositional justification. If she's right, then insofar as higher-order defeat happens at all, it must be that higher-order evidence does in fact rob the agent of propositional justification. So: What is her argument here? Make sure you understand its structure. (Note: she discusses inferential belief and non-inferential belief separately. Make sure you understand both arguments.)
This brings us to the second part of the paper, where Ye proposes a condition on propositional justification that might explain why higher-order evidence robs agents of propositional justification. The proposed condition is roughly this: an agent has propositional justification to believe P only if there is available in principle a way for someone in the agent's evidential situation to hold that belief responsibly. It's easy enough to see that, if this is right, then, if higher-order evidence can make it impossible in principle for someone in the agent's evidential situation to believe P responsibly, then higher-order evidence can rob the agent of propositional justification. But there's obviously more to the story here. So:
What does it take, on Ye's view, to hold a belief responsibly? Why does she think so? She suggests that believing responsibly, in this sense, is required for (doxastic) justification. Is this plausible? Why or why not?
Is it plausible, on Ye's view of what it is to hold a belief responsibly, that higher-order evidence can make it impossible in principle for someone to hold a belief responsibly? Why or why not?
For Ye's proposed condition to be correct, it needs to be the case, not just that actually believing responsibly is required for doxastic justification, but that possibly believing responsibly is required for propositional justification. Does she give an argument for the latter claim? If so, what is it?
Discussion chairs: Lukas Lautischer (lukas[dot]lautischer[at]stud[dot]sbg[dot]ac[dot]at), Dominik Hinterhofer (dominik[dot]hinterhofer[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 11
Mandatory reading:
FBE, excerpts from Chapter 4: "Updating by conditionalization"; you should read from the beginning of the chapter to the start of Section 4.3 (on p. 108)
FBE, excerpts from Chapter 10: "Accuracy arguments"; you should read the chapter's introduction as well as Section 10.2
Note 1: No need to read Section 10.1.
Note 2: Don't worry about the details of the proof of the Gradational Accuracy Theorem. Just make sure you understand what the theorem says and why it's significant.
Lecture notes: Week 12
Midterm paper due at start of class
Mandatory reading:
FBE, Section 10.5 of Chapter 10: "Accuracy arguments"
Note: This section is only three pages long. Please read it carefully.
Lecture notes: Week 13
Problem set 3 due at start of class
Mandatory reading:
Miriam Schoenfield, "An accuracy based approach to higher order evidence", 2018 (also available here)
Optional reading:
Darren Bradley, "Self-location is no problem for conditionalization", 2011 (also available here)
Questions for reflection
In the first two sections of Schoenfield's paper, she rehearses the debate over higher-order evidence, introduces the basic idea of the accuracy-first program in epistemology, and then discusses, in broad terms, how the accuracy-first program might bear on the higher-order evidence debate. The main lesson of these sections can be stated as follows. It's intuitive that the agent in (for instance) the hypoxia case should, if her goal is to be accurate, adjust her beliefs in the face of higher-order evidence -- i.e., should calibrate; the entire point of calibrating, after all, is to take into account evidence that she's likely to have made calculation errors. But this gives rise to a puzzle, because there's another way of responding to her evidence that, by her own lights, is expected to be more accurate than calibrating -- namely, steadfasting: ignoring her higher-order evidence and simply responding correctly to her first-order evidence. Why does she expect steadfasting to be more accurate? Because, if she responds correctly to her first-order evidence, she'll do her calculations correctly and so will become highly confident in the correct answer about whether she has enough fuel, while if she calibrates, she'll only have a middling degree of confidence in the correct answer about whether she has enough fuel. Schoenfield's goal is to make sense of what's going on here.
The obvious thing to say here is that the reason the agent is not rationally required to steadfast (and is required to calibrate instead) is that steadfasting is not a legitimate candidate response at all, for the simple reason that, once the agent has gotten the higher-order evidence, she should not expect that she can steadfast. What the higher-order evidence suggests, after all, is that, in the state she's in, she's not likely to do her calculations correctly. But Schoenfield says this response is "unpromising". Why does she think so?
The main body of the paper -- the portion that comes after these two introductory sections -- is divided into two parts. In Part I, Schoenfield argues, relying on the standard accuracy-based approach used by Greaves and Wallace, that steadfasting is indeed more expectedly accurate than calibrating. Then, in Part II, she introduces an alternative accuracy-based approach -- the "planning framework" -- and shows that, on this approach, calibrating turns out to maximize expected accuracy.
On Part I
The basic argument here, at least in its initial form, goes something like this:
Conditionalization is the update procedure that maximizes expected accuracy (by Greaves and Wallace's argument).
If one conditionalizes, one thereby steadfasts.
Therefore, steadfasting is more expectedly accurate than calibrating.
The obvious question is: why think premise 2 is true? And the answer, which Schoenfield gives in Section 4, can be given as follows. Remember: what Greaves and Wallace's proof shows is that at the time before undergoing a learning experience, the agent will take conditionalizing on whatever evidence she gains during that learning experience to be the response to the learning experience that maximizes expected accuracy. So let's suppose that the agent in the hypoxia case (Schoenfield calls her Aisha) is sitting at home at 10pm on Sunday, considering all the evidence she might gain over the course of the learning experience she's going to undergo between now and 10am on Monday. One of the possible bodies of evidence under consideration, we'll suppose, is E&H, where E is some huge proposition describing all the first-order evidence she'll in fact gain between now and 10am on Monday and where H is the proposition that Aisha is hypoxic at 10am on Monday. Suppose also that G is the proposition that Aisha, at 10am on Monday, has enough fuel to reach the farther island. The question, then, is: at 10pm on Sunday night, what is Aisha's conditional-on-E&H credence in G? And the answer is pretty clearly that it's very high: E contains all the information needed to do the fuel calculations, and as for H, the information that Aisha will be hypoxic at 10am on Monday doesn't call into question her ability to calculate correctly on Sunday night. So her initial conditional credence in G is very high, which means that, if she conditionalizes on E&H, then her new unconditional credence in G will be very high. But this is just to say that, if she conditionalizes, she'll steadfast. (Remember, if she calibrates, her new credence will be middling, not high.)
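Put schematically (the notation is mine), the key step is just this:

```latex
% If Aisha updates by conditionalization, her Monday-morning credence in G equals her
% Sunday-night credence in G conditional on the total evidence E & H she gains in between:
cr_{\mathrm{Mon}}(G) = cr_{\mathrm{Sun}}(G \mid E \,\&\, H)
% Since E settles the fuel calculation and H doesn't call her Sunday-night reasoning into
% question, this conditional credence is very high -- so conditionalizing here just is steadfasting.
```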
Is this convincing? Why or why not?
One usual response to this kind of argument is that it fails to take into account the role of self-locating belief -- belief, not just about what the world is like, but about who I am and where I am in the world (i.e., where I am in time and space). The worry can be put as follows. What explains the difference between how Aisha should respond to her supposition of E&H on Sunday night and how she should respond to actually learning E&H on Monday morning is that, on Monday morning, but not on Sunday night, H tells her something about her present situation. (This basic point was made in the very first paper we read for this course, Christensen's "Higher-order evidence", which Schoenfield discusses at some length.) We might say that what Aisha learns is not just that Aisha is hypoxic at 10am on Monday; what she learns is that I am hypoxic now. And when, at 10pm on Sunday, she supposes E&H, she's not supposing that. So it's no surprise that her later unconditional credence doesn't match her earlier conditional-on-E&H credence; E&H just doesn't capture everything she learns, since it doesn't capture this self-locating information.
How, exactly, might we avoid this problem? Schoenfield considers the possibility of updating on self-locating evidence. In particular, she considers the possibility of replacing H with a self-locating evidence proposition -- the proposition that I am hypoxic now -- and leaving everything else the same. She suggests that this is a way of implementing Christensen's proposal. Is she right about this? Why or why not?
At any rate, Schoenfield suggests that allowing for self-locating evidence won't help us avoid the conclusion that steadfasting maximizes expected accuracy. I'm not going to say too much about all this here, because I'm going to focus on it in my lecture. But I will say that the main conclusion she comes to is that Greaves and Wallace's proof, appropriately generalized, gives the conclusion that the accuracy-maximizing response to a learning experience is always going to be to conditionalize on a proposition that is not self-locating, and conditionalizing on this non-self-locating proposition is always going to yield steadfastness.
On Part II
Despite all this, Schoenfield wants to say that there's a sense in which calibrating is the rational thing to do. So she introduces a new accuracy-based approach that, she says, gives us that conclusion. The basic observation she makes is this: Greaves and Wallace, when they calculate the expected accuracy of an update procedure, do so by calculating the expected accuracy of conforming to that update procedure. So, for instance, the expected accuracy of conditionalization is just the expected accuracy of the credences that would result from successfully conditionalizing. But we might instead consider the expected accuracy, not of actually conditionalizing, but of adopting conditionalization as our preferred update procedure -- that is, of forming a plan to conditionalize. Schoenfield's suggested approach -- again, the planning framework -- is to regard as rational, not the update procedure conforming to which would maximize expected accuracy, but the update procedure planning to conform to which would maximize expected accuracy.
Here's why this is a significant difference: the agent might be able to predict that, even if she plans to conform to some update procedure, she'll fail to do so and will end up conforming to some other update procedure instead. (Indeed, we might think of the kind of higher-order evidence that's at play in the hypoxia case as precisely evidence that suggests that, if the agent tries to conform to certain update procedures, she'll fail.) And in a case like this, the expected accuracy of conforming to that update procedure will be different from the expected accuracy of planning to conform to that update procedure.
Suppose, for instance, that Aisha plans to steadfast in the face of her hypoxia evidence. That means she plans to do her calculations correctly and arrive at a very high credence in the correct answer about whether she has enough fuel. But even if this is her plan, she should not expect, given her hypoxia evidence, that this is what she'll in fact do. She should expect that, in trying to implement her plan, it's at least somewhat likely that she'll make a mistake and end up arriving at a very high credence in the incorrect answer about whether she has enough fuel. So, although the expected accuracy of actually steadfasting is quite high, the expected accuracy of planning to steadfast is significantly lower.
Moreover, nothing similar is true of calibrating: Aisha expects that, if she plans to calibrate, she'll in fact calibrate. So the expected accuracy of planning to calibrate is exactly the same as the expected accuracy of calibrating. So, despite the fact that steadfasting is more expectedly accurate than calibrating, it may well be the case that planning to steadfast is less expectedly accurate than planning to calibrate.
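Here's a toy calculation that displays the structure of this contrast (the numbers and the accuracy measure are stand-ins I've chosen for illustration, not Schoenfield's):

```python
# Toy illustration of why planning to calibrate can beat planning to steadfast,
# even though successfully steadfasting beats calibrating.
# All numbers and the accuracy measure are hypothetical, chosen only to display the structure.

def accuracy(credence_in_truth):
    """Quadratic (Brier-style) accuracy of a credence in the true answer: higher is better."""
    return 1 - (1 - credence_in_truth) ** 2

# Credences in the true answer about the fuel, under each outcome:
STEADFAST_SUCCESS = 0.99   # calculations done correctly
STEADFAST_FAILURE = 0.01   # calculations botched: high confidence in the wrong answer
CALIBRATED        = 0.60   # middling credence, as the hypoxia evidence recommends

# Expected accuracy of *conforming* to each procedure:
print("conform to steadfasting:", accuracy(STEADFAST_SUCCESS))   # 0.9999
print("conform to calibrating: ", accuracy(CALIBRATED))          # 0.84

# Expected accuracy of *planning* to conform, where (per the hypoxia evidence)
# a plan to steadfast succeeds only half the time, while a plan to calibrate always succeeds:
P_SUCCEED_IF_STEADFASTING = 0.5
plan_steadfast = (P_SUCCEED_IF_STEADFASTING * accuracy(STEADFAST_SUCCESS)
                  + (1 - P_SUCCEED_IF_STEADFASTING) * accuracy(STEADFAST_FAILURE))
plan_calibrate = accuracy(CALIBRATED)

print("plan to steadfast:", plan_steadfast)   # about 0.51
print("plan to calibrate:", plan_calibrate)   # 0.84 -- planning to calibrate wins
```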
And indeed, Schoenfield proves, given certain assumptions, that this is so: she proves that calibrating is the response to higher-order evidence that maximizes expected accuracy.
What assumptions is she relying on? Are they plausible?
Do you find the planning framework plausible, or do you find the standard framework more plausible? Or do you think they just give us distinct but equally legitimate ways of thinking about rationality?
One of the reasons Greaves and Wallace work with the standard framework is that they are thinking of a highly idealized kind of rationality: they are assuming that all possible ways of responding to the evidence are genuinely available to the agent, in the sense that, if she tries to conform to any such procedure, she'll succeed. What is the role of idealization, if any, in Schoenfield's framework?
Discussion chairs: Jon Parnell (jonathan[dot]parnell[at]stud[dot]sbg[dot]ac[dot]at), Mason Kreidler (s1073843[at]stud[dot]sbg[dot]ac[dot]at)
Lecture notes: Week 14
Final paper due Sunday 28 February, 23:59