Hanti Lin

Associate Professor
Department of Philosophy
University of California at Davis
ika[AT]ucdavis.edu

I am a philosopher of science and formal epistemologist at UC Davis. I did my postdoc at the Australian National University and my PhD at Carnegie Mellon University.

For the past four years I have been working on a project that aims to justify certain kinds of inductive inference and thereby make some progress toward answering Hume's problem of induction. To set the bar high, the project specifically targets inductive inferences that are fundamental to the sciences but have hitherto proved recalcitrant, resisting justification in statistics, machine learning theory, and formal epistemology. I have at least two examples in mind: (a) enumerative induction to its full conclusion, e.g. that all ravens are black, not merely that all the ravens you will observe are black; (b) causal inference without the (somewhat notorious) faithfulness condition, or the like, assumed in machine learning. To justify those kinds of induction, I articulate and defend an epistemological tradition that is highly influential in science but often underrecognized and misunderstood in philosophy: the tradition that takes seriously the epistemic ideal of convergence to the truth, following in the footsteps of C. S. Peirce, H. Reichenbach, and H. Putnam. See the following three papers for details. You can find my CV here.

1. Modes of Convergence to the Truth: Steps toward a Better Epistemology of Induction, forthcoming in The Review of Symbolic Logic.

This paper aims to justify enumerative induction in its full strength, a task that few formal epistemologists (if any) have attempted before. The slides presented at the 2018 Formal Epistemology Workshop are available here.

2. The Hard Problem of Theory Choice: A Case Study on Causal Inference and Its Faithfulness Assumption, (2019) in Philosophy of Science.

Using the same justification strategy as the preceding paper, this paper aims to justify causal inference without assuming what almost all theorists of causal discovery assume: the famous Causal Faithfulness Condition or a similar assumption. The slides presented at the 2018 PSA are available here.

3. On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption, (2020) in Proceedings of Machine Learning Research.

This is a paper in statistics and machine learning theory, proving the theorems needed for the preceding, philosophical paper. It is joint work with Jiji Zhang.