I recently finished my postdoc at Cornell University, where we developed an LLM-powered personalized reading assistant for medical researchers. I continue to work on functional AI for scientific research. I completed my Ph.D. in August 2019 with a focus on learning from human feedback for human-robot interaction, and I also did a postdoc at Stanford University, where I worked at the intersection of value alignment, healthcare, and food.
Value-aligned AI: Powerful AI systems should embody human values if we want to avoid unintended consequences of the AI revolution. However, it is not easy to specify human values in a form that AI systems can use. I have been working on enabling AI systems to learn human values autonomously. In my doctoral research, I designed novel interactions that allow AI systems and robots to seek alternative forms of guidance from humans, and I developed active learning algorithms that let them learn human values faster.
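To give a concrete flavor of this line of work, here is a minimal sketch of active preference-based reward learning, one common formulation of learning values from human feedback. It is illustrative only: the linear reward model, the Bradley-Terry query likelihood, and every name and number below are assumptions of the sketch, not details of my published methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_prob(w, phi_a, phi_b):
    """Bradley-Terry likelihood that a human prefers option A over option B
    under a linear reward model r(x) = w . phi(x)."""
    return 1.0 / (1.0 + np.exp(-(w @ (phi_a - phi_b))))

def pick_query(w_samples, candidates):
    """Actively choose the candidate pair the current belief is most unsure
    about: the one whose mean preference probability is closest to 0.5."""
    def uncertainty_gap(pair):
        phi_a, phi_b = pair
        p = np.mean([preference_prob(w, phi_a, phi_b) for w in w_samples])
        return abs(p - 0.5)
    return min(candidates, key=uncertainty_gap)

# Toy demo with 2-D features; the "posterior" is a crude set of samples.
w_true = np.array([1.0, -0.5])                       # hidden human values
w_samples = rng.normal(size=(100, 2))                # prior samples over w
for _ in range(5):
    candidates = [(rng.normal(size=2), rng.normal(size=2)) for _ in range(20)]
    phi_a, phi_b = pick_query(w_samples, candidates)
    human_prefers_a = w_true @ (phi_a - phi_b) > 0   # simulated human answer
    # Rejection-style update: keep only samples consistent with the answer.
    kept = [w for w in w_samples
            if (w @ (phi_a - phi_b) > 0) == human_prefers_a]
    if kept:
        w_samples = np.array(kept)

print("posterior mean estimate of w:", w_samples.mean(axis=0))
```

The point of the sketch is the query selection step: instead of asking random questions, the system asks the comparison it is currently most uncertain about, which is what lets it learn the human's values with fewer queries.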
AI for social good: The pandemic and the accompanying spread of medical misinformation got me interested in AI for healthcare. One stepping stone towards making authentic medical information more accessible is simplifying it for the general public. My past research on medical text simplification built on advances in controllable text generation to bridge the gap between how quickly medical content appears online and how accessible it is.
AI for scientific research: More recently, I have been working on functional and value-aligned AI systems for open-ended text generation tasks in scientific research. My projects include:
Retrieval-augmented medical text simplification for interdisciplinary researchers in medicine, where I explored LLMs' ability to personalize simplification (a minimal sketch of the idea follows this list).
Scientific/medical hypothesis generation using graph-aware LLMs, where I explore how to align LLM-based graph reasoning with human-like hypothesis search.
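For the simplification project above, the core retrieve-then-prompt loop can be sketched as follows. Everything here is a toy stand-in: the three-entry gloss corpus, the bag-of-words retriever, and the prompt template are assumptions of the sketch, and the final LLM call is intentionally left out.

```python
import math
import re
from collections import Counter

# Tiny stand-in corpus of plain-language glosses (hypothetical, demo only).
definitions = {
    "myocardial infarction": "a heart attack: blood flow to the heart is blocked",
    "hypertension": "high blood pressure",
    "angioplasty": "a procedure that opens narrowed blood vessels",
}

def bow(text: str) -> Counter:
    """Lowercase bag-of-words vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(sentence: str, k: int = 2):
    """Return the k glosses most similar to the input sentence."""
    q = bow(sentence)
    ranked = sorted(definitions.items(),
                    key=lambda kv: cosine(q, bow(kv[0])), reverse=True)
    return ranked[:k]

def build_prompt(sentence: str, audience: str) -> str:
    """Assemble a personalized simplification prompt with retrieved context;
    the LLM call itself is omitted, so the sketch stops at prompt construction."""
    context = "\n".join(f"- {term}: {gloss}" for term, gloss in retrieve(sentence))
    return (f"Background definitions:\n{context}\n\n"
            f"Rewrite the following sentence for a {audience}:\n{sentence}")

print(build_prompt("The patient underwent angioplasty after a myocardial infarction.",
                   "researcher from outside medicine"))
```

In a real system the retriever would pull from a large terminology resource and the audience description would come from a reader profile; here both are reduced to the smallest pieces needed to show where retrieval and personalization plug into the prompt.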