I am a fifth-year Ph.D. student in the Department of Linguistics at New York University. I served as the organizer of the Syntax Brown Bag, a series of syntax talks at NYU, and of the ML² Reading Group, a biweekly reading group on deep neural network models for natural language processing (NLP) and natural language understanding (NLU). I am not a wallaby.

I am interested in the syntactic structure of natural language and its accompanying interpretation. My primary interest is the formulation of deontic modality. Inspired by the morphosyntax of Korean and Japanese modal expressions, I am developing a counterfactual-based account of deontic modality and exploring its implications for the general discussion of modality. I am currently concerned with the 'if p, ought p' problem (a.k.a. Zvolenszky's puzzle), the problem of supererogation, the dynamic aspect of Professor Procrastinate, Ross' paradox, the miners puzzle, and strong permission.

I have also been interested in developing a parsing-based model of presupposition projection that is robust to crosslinguistic variation in word order. More specifically, the model should be sensitive not to linear order but to the order of evaluation. I draw a connection to continuation semantics in fleshing out this idea.

While many of my projects take common theoretical assumptions (Merge, Agree, possible worlds, and so on) for granted, I am also interested in computationally implementing the language faculty, with an emphasis on how to link lower-level representations to higher-level abstract behaviors. In fact, I have always been interested in connecting lower-level entities to higher-level concepts: from circuit theory to transistors, from transistors to logic gates, from logic gates to adders and multipliers, and eventually from machine languages to high-level programming languages. I believe that natural language inference is at the heart of natural language understanding, as knowing how to judge entailment relations is a good indication of the ability to understand the meaning of an expression. I have been developing a model that better represents semantic composition under the distributional hypothesis, using techniques from deep learning.

Prior to the aforementioned projects, I worked on the syntactic aspects of the Korean pro-form kuleh. In this work, I show that the size of the ellipsis site correlates with extraction possibilities, leading to the conclusion that syntactic configuration plays a crucial role in ellipsis.

Before I was introduced to linguistics, I was trained as an engineer at Seoul National University and received a B.S. in Electrical and Computer Engineering. I worked at CDNetworks for three years as a software developer, developing and maintaining a user interface, a download manager, a peer-to-peer application, and an anti-reverse-engineering module. Most of my projects were carried out in C++ and inline assembly.

Outside of linguistics, I am strongly expected to function as a father of two children. Over the past few years I have become an expert in feeding kids and changing diapers. I am currently competing with my two-year-old daughter in learning Japanese.

You can view my CV here (updated 9/11/2018).