About Me

Hanti Lin

Assistant Professor
Department of Philosophy
University of California at Davis

ika[AT]ucdavis.edu

I am a philosopher of science and formal epistemologist at UC Davis. I did my postdoc at the Australian National University and my PhD at Carnegie Mellon University. I have spent the past two or three years working on a project that aims to justify certain kinds of inductive inferences and thereby to make some progress in our endeavor to reply to Hume's problem of induction. To set the bar very high, the project targets specifically those inductive inferences that are fundamental to the sciences but have hitherto been quite recalcitrant---resisting justification in statistics, machine learning theory, and formal epistemology. I have at least two examples in mind: (a) enumerative induction to its full conclusion, e.g., that all ravens are black, not just that all the ravens you will observe are black; (b) causal inference without the (somewhat notorious) faithfulness condition or the like assumed in machine learning. To justify those kinds of inductions, I articulate and defend an epistemological tradition that is very influential in science but often underrecognized and misunderstood in philosophy---the tradition that takes seriously the epistemic ideal of convergence to the truth, following in the footsteps of C. S. Peirce, H. Reichenbach, and H. Putnam. See the next section for more details. You can find my CV here.

Main Project: Learning-Theoretic Epistemology

If I am right, the epistemological tradition I just mentioned is best reconstructed as a commitment to the following core guidelines:

(i) Inferential procedures and learning methods for tackling one empirical problem or another should be evaluated and justified in terms of a certain distinguished group of epistemic ideals, the most important of which include (but are not limited to) convergence to the truth and its various modes.

(ii) An inferential procedure is always evaluated with respect to how good it is for tackling one empirical problem or another, for there is no such thing as a universally best inferential procedure, one that is best for tackling all possible empirical problems.

(iii) Convergence to the truth has many different modes. They are epistemic ideals for inferential procedures to achieve where possible, and some are higher epistemic ideals than others. When tackling an empirical problem, we should determine which modes of convergence to the truth can be achieved, and try to achieve the best mode available. (Two such modes are sketched, for illustration, right after guideline (v) below.)

(iv) Taking convergence to the truth seriously does not imply ignoring other epistemic ideals. The epistemological tradition in question only says this: 

For any empirical problem P, a learning method or inferential procedure counts as one of the best for tackling P only if it satisfies this constraint: achieving the best achievable mode of convergence with respect to P.

Note that this constraint is only claimed to be necessary. Feel free to pursue any other epistemic ideals you value in addition to convergence to the truth.

(v) The above constraint, although concerned only with convergence, can actually lead to short-run epistemic constraints on what to believe, provided that we pursue certain additional epistemic ideals---especially the ideal of stable inquiry that Plato favors in the Meno, and the Bayesian ideal of diachronic coherence.
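
To fix ideas about guideline (iii), here is a minimal sketch of two modes of convergence, phrased in textbook learning-theoretic terms rather than in the exact taxonomy developed in the papers cited below; the symbols W, w|_n, M, and N are introduced only for this illustration.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % Illustrative definitions only: this is a standard formal-learning-theoretic
    % setup, not notation taken from the papers listed further down the page.
    Fix an empirical problem: a set $\mathcal{W}$ of possible worlds, where each
    world $w \in \mathcal{W}$ generates an unending data stream whose first $n$
    entries are written $w|_n$. A learning method $M$ outputs a hypothesis
    $M(w|_n)$ after receiving the data $w|_n$.
    \begin{description}
      \item[Pointwise convergence to the truth:] for every world $w \in \mathcal{W}$,
        there is a time $N_w$ such that $M(w|_n)$ is true in $w$ whenever $n \ge N_w$.
      \item[Uniform convergence to the truth:] there is a single time $N$ such that,
        for every world $w \in \mathcal{W}$, $M(w|_n)$ is true in $w$ whenever $n \ge N$.
    \end{description}
    Uniform convergence entails pointwise convergence but not conversely, so it is
    the higher of the two ideals. For the raven problem, only the pointwise mode is
    achievable: no finite run of black ravens rules out a non-black raven later on.
    \end{document}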

Many people have contributed ideas to one of those guidelines or another. Contributors include, for example: 

- philosophers C. S. Peirce, H. Reichenbach and H. Putnam
- machine learning theorists E. M. Gold, D. Angluin, and L. Valiant
- statistician R. Fisher, and even Bayesian statisticians P. Diaconis and D. Freedman

But none of them takes all five guidelines seriously. I do. I call the above epistemological tradition learning-theoretic epistemology, for it is most recognizable in the many branches of learning theory as studied in theoretical computer science.

In this project, I will articulate the core guidelines (i)-(v) clearly, and keep my distance from additional theses that are entirely optional but potentially misleading. Once that is done, I will be able to argue that learning-theoretic epistemology deserves philosophers' attention: that it is able to reply to the Keynesian worry that in the long run we are all dead, that it is compatible with Bayesianism, that it is neutral between externalism and internalism, and that it can even be welcomed by coherentists and evidentialists despite its reliabilist flavor.

This project has already produced some results. I have been able to argue that, by taking all the core guidelines (i)-(v) seriously, we can justify certain kinds of inductive inferences that have long resisted justification in formal epistemology, statistics, and machine learning. See the following papers for details:

1. Modes of Convergence to the Truth: Steps toward a Better Epistemology of Induction

This paper aims to justify enumerative induction at its full strength---a task that very few formal epistemologists (if any) have attempted before. Slides are available here.


With the same justification strategy as in the preceding paper, this paper aims to justify causal inference without assuming what almost all theorists of causal discovery assume: the famous Causal Faithfulness Condition or the like.


This is a paper in statistics and machine learning theory, providing the theorems that are needed in the preceding, philosophical paper. It is joint work with Jiji Zhang.

For more details about the project, visit the project page.

My older work focuses on the cognitive and conative roles of accepting sentences or propositions, especially the roles that acceptance can or should play in inquiry, decision-making, and linguistic understanding, even for a Bayesian agent. That constitutes the bulk of my publications so far. I am also interested in philosophy of language and logic, especially compositional, non-truth-conditional semantics in line with expressivism. But for now I have to focus a lot more on the main epistemological project before I can get back to a semantics paper I have presented several times: "When 'Or' Meets 'Might': Toward Acceptability-Conditional Semantics", which is available upon request.

Publications


Lin, H. (forthcoming) “Belief Revision Theory”, in Pettigrew, R. and Weisberg, J. (eds.) The Open Handbook of Formal Epistemology.

Hájek, A. and Lin, H. (2017) “A Tale of Two Epistemologies?”, Res Philosophica, 94(2): 207-232.

Lin, H. (2017) “Enumerative Induction and Semi-Uniform Convergence to the Truth”, in Baltag, A., Seligman, J. and Yamada, T. (eds.) Logic, Rationality, and Interaction: Proceedings of the 6th International Workshop, LORI 2017, Lecture Notes in Computer Science, Springer, 362-376.

Kelly, K. T., Genin, K. and Lin, H. (2016) “Realism, Rhetoric, and Reliability”, Synthese, 193(4): 1191-1223.

Lin, H. (2016) “Bridging the Logic-Based and Probability-Based Approaches to Artificial Intelligence”, in Hung, T.-W. (ed.) Rationality: Constraints and Contexts, Amsterdam: Elsevier.

Lin, H. (2016) “The Meaning of Epistemic Modality and the Absence of Truth”, in Yang, C-M., Deng, D.-M., and Lin, H. (eds.) Structural Analysis of Non-Classical Logics, Berlin: Springer-Verlag.

Lin, H. (2014) “On the Regress Problem of Deciding How to Decide”, Synthese, 191: 661-670.
