I'm a Computer Science major at Stanford, advised by Prof. Mehran Sahami.
As a student researcher at the Stanford Artificial Intelligence Lab, I'm directly advised by Prof. Noah Goodman and Dr. Chen Shani. Before that, I was an independent student researcher working with Prof. Ellen Vitercik, and a research assistant advised by Prof. Tobias Gerstenberg at CiCLab. I have also interned at Meta and Apple.
I'm also the President of the Stanford Symbolic Systems Society, and a leadership organizer at Stanford AI Alignment.
I'm broadly interested in algorithmic innovation and conceptual modeling, seeking to amplify the impact and foster the growth of human intellect. Here's my current research:
Machine Learning Innovations: Semi-supervised Learning (Bridged Clustering), Parameter-efficient Fine-tuning (Attention Spectral Gating)
Cognitive Modeling in AI: Concept Modeling (Analogies), LLM Reasoning (Abstractions), Human-centered AI (Socially Intelligent AI)
Under Review at ICLR 2026 (First Author)
NLP Empirical ML Theoretical ML
Contribution:
I extended my Bridged Clustering algorithm to align unsupervised sources for cross-modal representation learning, developing method-agnostic, interpretable, and bidirectional models supported by both theoretical analysis and large-scale NLP experiments.
AAAI Workshop 2025 (First Author)
🌟 Poster Submission to AI4Research @ AAAI 2025
Empirical ML Theoretical ML
Contribution:
I independently invented Bridged Clustering, one of the first semi-supervised learning algorithms to jointly exploit unsupervised inputs and outputs, realigning disjoint datasets toward new supervised objectives across any pair of under-supervised modalities.
In collaboration with Stanford Biology, I applied my algorithm in scientific research, jointly modeling disjoint biological datasets collected by Dr. Wu's team.
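The core idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the published implementation: it assumes k-means clusterings of the two unsupervised datasets and a small set of paired examples that "bridges" the clusters; all names and parameters here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def bridged_clustering(X, Y, paired_idx, k=3, seed=0):
    """Sketch of the bridged-clustering idea: cluster X and Y independently,
    then use a small set of paired (x, y) indices to link the clusterings."""
    kx = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    ky = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Y)
    # Learn an X-cluster -> Y-cluster map by majority vote over the bridge set.
    bridge = {}
    for cx in range(k):
        ys = [ky.labels_[j] for i, j in paired_idx if kx.labels_[i] == cx]
        bridge[cx] = max(set(ys), key=ys.count) if ys else cx
    def predict(x_new):
        # Assign x_new to an X-cluster, then return the bridged Y centroid.
        cx = kx.predict(np.asarray(x_new, dtype=float).reshape(1, -1))[0]
        return ky.cluster_centers_[bridge[cx]]
    return predict
```

Only the small paired subset is supervised; the bulk of both modalities stays unlabeled, which is what makes the setup semi-supervised on both the input and output sides.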
AAAI Workshop 2025 (First Author)
🌟 Poster Submission to iRAISE @ AAAI 2025, Published at PMLR
Socially Intelligent AI Large Language Models
Contribution:
I invented an Analogy Tutor system in which LLM tutors teach human learners abstract concepts through cross-domain analogies, with an end-to-end concept-anonymization framework and LLM-based student simulation.
NeurIPS Workshop 2023 (Co-Second Author)
🌟 Spotlight Submission to NeurIPS Workshop SoLaR
Socially Intelligent AI Large Language Models AI Alignment
Contribution:
I co-developed Social-Contract AI, a framework for aligning LLM assistants with implicit societal norms, studying behaviors such as sycophancy and altruism. I led the development of the full experimental pipeline: Constitutional AI and Verbal RL, prompt design, human and LLM-based evaluations, and large-scale data collection.
IEEE 2021 (Co-First Author)
Socially Intelligent AI AI Alignment
Contribution:
I co-led this study of how AI technology integrates socially into families and affects children's psycho-social formation, examining how AI co-creates symbolically meaningful interactions with children. This work is among the first formulations of symbolic interactionism in human-centered AI.
CS 224N Project (Co-First Author)
NLP Large Language Models
Contribution:
I integrated in-context learning, fine-tuning with distillation, and Verbal Reinforcement Learning for semantic grouping tasks, achieving near-human-level performance and a 38.23% improvement over baseline clustering methods.