PUBLICATIONS (Korean)

ARTICLES

ABSTRACT: This paper is intended to explore a probabilistic relation between two credences which has received very little attention from Bayesian theorists. In particular, I will suggest a definition of the irrelevance between two conditional credences, and vindicate this definition by means of what I will call the Conservative Updating Principle (CUP). For this purpose, this paper is structured as follows: In Section 2, I will introduce Strict and Jeffrey Conditionalization and explain that they satisfy CUP. In Section 3, I will attempt to define, in the standard Bayesian framework, the irrelevance between two conditional credences. Lastly, I will show that the definition in the previous section enables Adams Conditionalization, which can deal with credence updating due to direct changes in conditional credences, to satisfy CUP. This result will be used to vindicate the adequacy of my definition of the irrelevance in question.
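
For reference, the two updating rules discussed in Section 2 can be stated in their standard forms (the notation below is illustrative, not the paper's own): upon learning $E$ with certainty, Strict Conditionalization requires
\[ C_{\mathrm{new}}(A) = C_{\mathrm{old}}(A \mid E), \]
and, when experience instead redistributes credence over a partition $\{E_i\}$, Jeffrey Conditionalization requires
\[ C_{\mathrm{new}}(A) = \sum_i C_{\mathrm{old}}(A \mid E_i)\, C_{\mathrm{new}}(E_i). \]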


ABSTRACT: According to John Earman, Bayesianism can regard Kuhnian scientific revolutions as introductions of new theories to the existing belief set. I will assume in this paper that this view is basically on the right track. However, it will be argued that some other works that hinge on, or are related to, Earman's view, such as Famakis (2008), Yeo (2011), Her (2015), and Chun (2011, 2019), have some problems. Moreover, I will suggest a new Bayesian way of modeling Kuhnian scientific revolutions, and thus the introduction of new theories. In particular, it will be argued that imprecise probabilism, which represents an agent's belief state by means of a set of probability functions, is a good alternative to the more standard approach.


ABSTRACT: This paper is intended to analyze the epistemological features of what is often called the 'Wason Selection Task' by means of epistemic utilities that are measured by the concept 'accuracy'. For this purpose, this paper is structured as follows: I will explain in Sec. 1 what can be regarded as an epistemic decision problem, and suggest in Sec. 2 that the Wason Selection Task can be analyzed as a decision problem of this kind. Moreover, it is explained in Sec. 3 how epistemic utilities, which are one of the core concepts in epistemic decision theory, can be measured using the concept 'accuracy'. On the basis of this explanation and suggestion, Secs. 4 and 5 are devoted to comparing and evaluating some actions of the test subjects in the Wason Selection Task. In this regard, I will appeal to the two main principles of standard epistemic decision theory—the Dominance Principle (Sec. 4) and the Principle of Maximization of Expected Epistemic Utility (Sec. 5). In doing so, we will see why the test subjects' selections can, in some sense, be evaluated as epistemically rational.
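
As a rough gloss on the two principles invoked in Secs. 4 and 5 (stated here in a generic form that may differ in detail from the paper's own formulations): Dominance says that it is irrational to choose an option $a$ when some alternative $a'$ has strictly higher epistemic utility at every world, i.e. $U(a', w) > U(a, w)$ for all $w$; and the expected-utility principle recommends an option maximizing
\[ \mathrm{EEU}(a) = \sum_{w} C(w)\, U(a, w), \]
where $C$ is the agent's credence function over worlds and $U$ is the epistemic utility function.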



ABSTRACT: Many philosophers have thought that Jeffrey Conditionalization, which is a version of the Bayesian credence updating rules, cannot model cases in which experience makes us lose our certainty. The objective of this paper is to critically examine the so-called Certainty Loss Problem, and to provide a way of modeling such belief updating in the Bayesian framework. For this purpose, this paper is structured as follows: In Section 1, I will give a general explanation of Bayesian epistemology. Section 2 will be devoted to explicating the relationship between the Certainty Loss Problem and Jeffrey Conditionalization. In Section 3, we will see one way of modeling the belief updating case at issue. In particular, I will introduce 'Initial Jeffrey Conditionalization (or Ur-Jeffrey Conditionalization)' and critically examine this kind of belief updating strategy. In the next section, I will provide another way of modeling the certainty loss case, which heavily depends on the imprecise probabilism framework. In doing so, we can find a theoretical relationship between certainties and precise credences.
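
A standard way of putting the Certainty Loss Problem (in generic notation, not the paper's own): if $C_{\mathrm{old}}(A) = 1$, then $C_{\mathrm{old}}(A \mid E_i) = 1$ for every cell $E_i$ of the evidence partition with positive prior credence, so a Jeffrey update yields
\[ C_{\mathrm{new}}(A) = \sum_i C_{\mathrm{old}}(A \mid E_i)\, C_{\mathrm{new}}(E_i) = \sum_i C_{\mathrm{new}}(E_i) = 1. \]
Hence no Jeffrey update can move a credence of 1 below 1, which is why certainty loss seems beyond the rule's reach.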



ABSTRACT: As is well known, Accuracy-based Epistemology has justified the claim that our precise belief state should be represented by a single coherent belief function—that is, a single probability function. Can such an epistemology justify, by the same token, the claim that our imprecise belief state should be represented by a set of coherent belief functions—that is, a set of probability functions? In this paper, I will attempt to respond to this question. In particular, I will show that the way of justifying Precise Probabilism can scarcely be applied to justifying Imprecise Probabilism. To achieve this purpose, this paper is structured as follows: Section 2 will be devoted to explaining the way of justifying Precise Probabilism by means of the concept 'accuracy'. In Section 3, I will formulate what I will call 'the Principle of Epistemic Parity', which will play a central role in the attempt to justify the coherence of imprecise belief sets. Moreover, I will show in this section that we cannot justify the coherence of imprecise belief sets even if we appeal to this principle and the concept of accuracy. Then, in Section 4, I will critically examine two possible ways of rescuing Imprecise Probabilism.
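
For orientation, the accuracy-based justification of Precise Probabilism referred to in Section 2 is standardly run with a measure such as the Brier score (the paper's own choice of measure may differ): the inaccuracy of a credence function $c$ at a world $w$ is
\[ \mathfrak{I}(c, w) = \sum_{X} \big( v_w(X) - c(X) \big)^2, \]
where $v_w(X)$ is 1 if $X$ is true at $w$ and 0 otherwise. The key dominance result is that any non-probabilistic $c$ is strictly less accurate than some probability function at every world, while no probability function is dominated in this way.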



ABSTRACT: This paper is intended to examine the relationship between the Principal Principle and Conditionalization. For this purpose, I will first formulate several versions of the Principal Principle and Conditionalization in Section 2. Regarding the relationship between the two norms, I will show in Section 3 that the Principal Principle and Conditionalization are complementary in two particular senses. The first complementarity is that we don't have to formulate every version of the Principal Principle if credences evolve by means of Conditionalization. The second complementarity is that we don't have to require rational agents to update their overall credal state by means of Conditionalization if they satisfy the Principal Principle. This result both criticizes and supplements some existing works on the relationship between the two norms.
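
In its standard Lewisian form, which the versions formulated in Section 2 presumably refine, the Principal Principle says that for an initial credence function $C_0$, any proposition $A$, and any evidence $E$ admissible at time $t$,
\[ C_0\big(A \mid \langle \mathrm{ch}_t(A) = x \rangle \wedge E\big) = x, \]
where $\mathrm{ch}_t$ is the objective chance function at $t$; Conditionalization, in turn, requires $C_{\mathrm{new}}(\cdot) = C_{\mathrm{old}}(\cdot \mid E)$ upon learning $E$.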



ABSTRACT: The objective of this paper is to explicate some of the main arguments concerning so-called Imprecise Probabilism. In Section 1, we will provide the motivations for and the formalism of Imprecise Probabilism. Sections 2 and 3 are devoted to introducing and explicating the way our credences are updated in the framework of Imprecise Probabilism, and the relationship between such probabilism and decision theory. Lastly, we will examine in Sections 4 and 5 the relationship between Imprecise Probabilism and other epistemological programs—in particular, perceptual dogmatism and accuracy-based probabilism.
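
In outline, the formalism introduced in Section 1 standardly runs as follows (details vary by author): an agent's belief state is represented not by a single probability function but by a nonempty set $\mathcal{C}$ of probability functions (a 'representor' or credal set), with the attitude toward a proposition $A$ summarized by the lower and upper credences
\[ \underline{C}(A) = \inf_{c \in \mathcal{C}} c(A), \qquad \overline{C}(A) = \sup_{c \in \mathcal{C}} c(A), \]
and updating on evidence $E$ carried out by conditionalizing each member of $\mathcal{C}$ on $E$.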



ABSTRACT: Many philosophers think that our credences should be related to objective chances in a certain way. Interestingly, Lewis's Principal Principle, which is one of the most famous such norms, turns out to be incompatible with Humean Supervenience, a metaphysical thesis regarding chances. As is well known, some modified versions of the Principal Principle have been suggested so as to circumvent this incompatibility—for example, the New Principle and the General Recipe. This paper aims to critically examine Ismael's General Recipe. To do so, the incompatibility in question will be presented in Section 2, and it will be explained how some modified versions of the Principal Principle circumvent it. Sections 4 and 5 will be devoted to critically examining the General Recipe. In particular, Pettigrew's argument against the General Recipe and Ismael's response to that argument will be considered in Section 4, and it will then be argued in Section 5, with the help of a modified version of Pettigrew's argument, that Ismael's response must fail.
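
The incompatibility presented in Section 2 is, in its familiar form, roughly this (in generic notation): under Humean Supervenience there are 'undermining futures' $F$ such that the chance theory $T_w$ together with the history $H_{tw}$ entails $\neg F$ even though $\mathrm{ch}_{tw}(F) > 0$; the Principal Principle then demands
\[ C(F \mid H_{tw} \wedge T_w) = \mathrm{ch}_{tw}(F) > 0, \]
while probabilistic coherence demands $C(F \mid H_{tw} \wedge T_w) = 0$. Modified principles such as the New Principle replace the right-hand side with $\mathrm{ch}_{tw}(F \mid T_w)$, thereby avoiding the contradiction.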


ABSTRACT: Hawthorne, Landes, Wallmann, and Williamson have recently argued that Lewis's Principal Principle implies the Principle of Indifference. This interesting claim seems to lead almost all Bayesians to objectivism, which says that the rational initial credence function is uniquely determined by some objectivist rule like the Principle of Indifference. However, I will show in this paper that their argument has a fatal problem. In particular, it is shown that one of the assumptions that the authors use to derive their conclusion has some strange consequences.



ABSTRACT: This paper aims to respond to Weisberg's claim that standard Bayesian epistemology cannot model an agent's belief updating that is triggered by undermining evidence. Our epistemological intuition seems to require that undermining evidence decrease certain relevant credences. According to Weisberg, however, such a belief change cannot result from the standard Bayesian belief updating rules—i.e., (Jeffrey) Conditionalization. This is because probabilistic independence between some propositions is preserved under (Jeffrey) Conditionalization on the relevant evidence. Yet I will show in this paper that this conclusion is somewhat hasty. In particular, I will show that there is another version of conditionalization, and that when one updates her credences by means of it, belief updating originating in undermining evidence can be well modeled in the Bayesian framework. Some authors call this version Higher Order Conditionalization.
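
The preservation result that drives Weisberg's argument can be sketched as follows (in generic notation): Jeffrey Conditionalization on a partition $\{E_i\}$ is rigid, i.e. $C_{\mathrm{new}}(A \mid E_i) = C_{\mathrm{old}}(A \mid E_i)$ for every cell, so if a proposition $A$ is independent of every cell beforehand, $C_{\mathrm{old}}(A \mid E_i) = C_{\mathrm{old}}(A)$, then
\[ C_{\mathrm{new}}(A) = \sum_i C_{\mathrm{old}}(A \mid E_i)\, C_{\mathrm{new}}(E_i) = C_{\mathrm{old}}(A). \]
A credence that should be undermined (e.g., confidence in the reliability of one's perception) therefore cannot budge under such an update.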


ABSTRACT: This paper critically examines epistemic deference to experts. It seems rational for us to defer to experts who are epistemically superior to us. However, this paper argues that such deference to experts can lead us into a dilemma. For this purpose, I will formulate, in Sections 2 and 3, two principles regarding epistemic deference, namely the General Deference Principle (GDP) and the Negative Deference Principle (NDP). Section 4 is dedicated to providing diachronic Dutch Book arguments for GDP and NDP. I will show in Section 5 that GDP and NDP are incompatible with each other. With the help of the arguments in Sections 4 and 5, I will conclude in Section 6 that defenders of epistemic deference to experts face a dilemma.
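
For orientation, positive deference principles of the kind GDP generalizes are usually modeled on the schema
\[ C\big(A \mid \mathrm{cr}_{\mathrm{exp}}(A) = x\big) = x, \]
where $\mathrm{cr}_{\mathrm{exp}}$ is the expert's credence function; GDP and NDP themselves are the paper's own formulations, and the negative principle is not reconstructed here.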


ABSTRACT: The objective of this paper is to provide some critical comments on Chun's 귀납 (Induction). For this purpose, this paper is structured as follows: In Section 1, I will briefly explain the structure and contents of Chun's book. Sections 2-4 respectively consider several problems regarding (i) Chun's motivation for distinguishing inductive logic from inductive methodology; (ii) the relationship between the objectivity of likelihoods and subjectivity in Bayesianism; and (iii) Chun's suggestion for unifying Bayesian and Frequentist approaches to inductive acceptance.


ABSTRACT: The main objective of this paper is to critically examine White's claim that there is a conflict between Perceptual Dogmatism and the Bayesian Theory of Confirmation. For this purpose, this paper is structured as follows: In Section 2, I will introduce White's argument. Section 3 is dedicated to explaining some elements of the Bayesian Theory of Confirmation. In particular, I will provide an explanation of confirmation measures and Bayesian Favoring. Using these two conceptual apparatuses, it will be shown that, contrary to what White thought, there is a way of supporting Perceptual Dogmatism by means of the Bayesian Theory of Confirmation—in particular, the Bayesian Theory of Favoring.
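
As a rough indication of the apparatus in Section 3 (the paper's own definitions may differ): a typical confirmation measure is the difference measure $d(H, E) = C(H \mid E) - C(H)$, and a typical favoring criterion says that evidence $E$ favors $H_1$ over $H_2$ just in case
\[ C(E \mid H_1) > C(E \mid H_2). \]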


ABSTRACT: This paper aims to extend the scope of Jeffrey conditionalization (JC). Many authors have thought that partial belief updates cannot be modeled using JC. However, I will argue in this paper that if we regard such belief updates as successive ones, we can model partial updates successfully by means of JC. For this purpose, I will provide some definitions and a relevant example in Section 2. In Section 3, I will explain the relationship between JC and partial belief updates. In particular, I explain why a single Jeffrey conditionalization cannot model partial belief updating. Lastly, it is argued in Section 4 that partial belief updating can be regarded as successive belief updating composed of two different sub-updates, i.e., evidential updating and corrective updating. I then suggest a Jeffrey rule for partial belief updates.
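
One background fact relevant to the successive-updating strategy (stated here for a single partition; the paper's own decomposition into evidential and corrective steps may involve different partitions): because JC is rigid, two successive Jeffrey updates on the same partition $\{E_i\}$ compose into a single Jeffrey update with the final weights,
\[ C_2(A) = \sum_i C_1(A \mid E_i)\, C_2(E_i) = \sum_i C_0(A \mid E_i)\, C_2(E_i). \]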

 

ABSTRACT: The main objective of this paper is to vindicate the Bayesian belief updating rule, i.e. conditionalization. For this purpose, I first introduce what I call the Irrelevance Principle, and show that this principle is equivalent to conditionalization. In turn, the principle is vindicated by means of Bayesian confirmation theory. That is, I suggest some theses that Bayesian confirmation theorists should accept, and prove that if the Irrelevance Principle is violated, those theses cannot hold.
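
The rule being vindicated is the standard one: upon learning $E$ (and nothing stronger), the new credence function should be
\[ C_{\mathrm{new}}(\cdot) = C_{\mathrm{old}}(\cdot \mid E), \]
which is equivalent to requiring that $C_{\mathrm{new}}(E) = 1$ while conditional credences given $E$ are held fixed, $C_{\mathrm{new}}(A \mid E) = C_{\mathrm{old}}(A \mid E)$. The Irrelevance Principle itself is the paper's own formulation and is not reconstructed here.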

 

ABSTRACT: This paper concerns several versions of conditionalization. In particular, I will examine the relationship between Jeffrey conditionalization and second-order conditionalization with respect to updating higher-order credences. In Section 2, I state explicitly what Jeffrey conditionalization and second-order conditionalization are. I will argue in Section 3 that Jeffrey conditionalization conflicts with van Fraassen's Reflection Principle while second-order conditionalization doesn't. And I will also argue in Section 4 that in some situations Jeffrey conditionalization may lead agents into Moorean absurdity while second-order conditionalization may not. As a result, I will claim that second-order conditionalization is better than Jeffrey conditionalization at updating our higher-order credences.
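
The Reflection Principle at issue in Section 3 is standardly stated as follows, for a later time $t' > t$:
\[ C_t\big(A \mid C_{t'}(A) = x\big) = x, \]
i.e. one's current credence in $A$, conditional on one's later credence in $A$ being $x$, should itself be $x$.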

 

ABSTRACT: The radical probabilists deny that propositions represent experience. However, since the impact of experience should be propagated through our belief system and be communicable to other agents, they need to find some alternative protocol which can represent the impact of experience. The protocol which the radical probabilists suggest is the Bayes factor. This is because Bayes factors factor out the impact of the prior probabilities and satisfy the requirement of commutativity. My main challenge to the radical probabilists is that there is another useful protocol, q(E|Np), which also factors out the impact of the prior probabilities and satisfies the requirement of commutativity. Moreover, I claim that q(E|Np) has a pragmatic virtue which the Bayes factors lack.
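
For a shift on the partition $\{E, \neg E\}$ from $C_{\mathrm{old}}$ to $C_{\mathrm{new}}$, the Bayes factor is standardly defined as the ratio of new to old odds,
\[ \beta(E) = \frac{C_{\mathrm{new}}(E)/C_{\mathrm{new}}(\neg E)}{C_{\mathrm{old}}(E)/C_{\mathrm{old}}(\neg E)}, \]
which is why it is said to factor out the prior; successive shifts commute because the final odds are just the initial odds multiplied by the product of the Bayes factors. The alternative protocol q(E|Np) is the paper's own notation and is not reconstructed here.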

 

ABSTRACT: Gödel (1953/9) criticized Carnap's syntactical view as self-defeating. Against this criticism, Awodey and Carus (2003a, b), as well as Ricketts (1996), claimed that since Gödel's argument has some faults, Carnap need not confront Gödel's criticism. In this article, I will critically examine their comments on Gödel's argument. First, it will be shown that Gödel's argument can be reconstructed in a way different from that of Awodey and Carus (2003a, b), and hence that their criticism of Gödel's argument is inadequate. Second, I will point out that there is a mistake in Ricketts's (1996) comment, but that, nevertheless, when Gödel's argument is reconstructed along the lines of Ricketts's criticism, it becomes ineffective as a criticism of Carnap's view.

 

BOOKS

06. 인과 (co-authored with 이영의 and four others), 서광사, 2020.

05. 철학테마강의 (co-authored with 김상득 and eleven others), 전북대학교출판원, 2019.

04. 입증 (co-authored with 이영의 and two others), 서광사, 2018.

03. 사고와 토론 (co-authored with 박준호 and five others), 전북대학교출판원, 2016.

02. 철학의 숲, 길을 열다 (co-authored with 송하석, 정재영, and 홍성기), 풀빛, Seoul, 2012.

01. 철학의 숲, 길을 묻다 (co-authored with 송하석, 정재영, and 홍성기), 풀빛, Seoul, 2011.

 

TRANSLATIONS

04. Korean Translation of Ian Hacking's An Introduction to Probability and Inductive Logic (Korean Title: 확률과 귀납논리), 서광사, 2022.

01-2. Korean Re-Translation of Kip Thorne's Black Holes and Time Warps (Korean Title: 블랙홀과 시간여행), Seoul: 반니. 2016.

02. Korean Translation of Lawrence Krauss’s Atom: A Single Oxygen Atom’s Odyssey From the Big Bang to Life on Earth … and Beyond (Korean Title: 외로운 산소 원자의 여행), Seoul: ez-book. 2005.

01-1. Korean Translation of Kip Thorne’s Black Holes and Time Warps (Korean Title: 블랙홀과 시간굴절), Seoul: ez-book. 2005. (Certification for Good Science Book awarded by KFASC)